Topic: "vram-optimization"
Ring Attention for >1M Context
gemini-pro gemma-7b gemma-2b deepseek-coder-6.7b-instruct llama-cpp google cuda-mode nvidia polymind deepseek ollama runpod lmstudio long-context ringattention pytorch cuda llm-guessing-game chatbots retrieval-augmented-generation vram-optimization fine-tuning direct-preference-optimization ml-workflows gpu-scaling model-updates liu zaharia abbeel
Google Gemini Pro has sparked renewed interest in long-context capabilities. The CUDA MODE Discord is actively working on implementing the RingAttention paper by Liu, Zaharia, and Abbeel, including extensions from the World Model RingAttention paper, with PyTorch and CUDA implementations already available. TheBloke Discord covered LLM guessing-game evaluation, chatbot UX comparisons between Nvidia's Chat with RTX and Polymind, challenges in retrieval-augmented generation (RAG) integration, VRAM optimization, fine-tuning for character roleplay using Direct Preference Optimization (DPO), and model choices such as deepseek-coder-6.7B-instruct. There was also discussion of ML workflows on Mac Studio, with llama.cpp preferred over ollama, and of scaling inference cost-effectively on GPUs like the 4090 via Runpod. LM Studio users must update manually to version 0.2.16, which adds Gemma support and bug fixes, especially for macOS. The Gemma 7B model has had performance issues, while Gemma 2B received positive feedback.
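For readers unfamiliar with the mechanism, here is a minimal single-process PyTorch sketch of the blockwise-softmax accumulation that RingAttention builds on: KV blocks "rotate" past each query block while a running max and denominator keep the softmax numerically exact. The ring is simulated with a plain Python loop; the function name and block layout are illustrative, not taken from the CUDA MODE implementations.

```python
import torch

def ring_attention_sim(q_blocks, k_blocks, v_blocks):
    """Single-process simulation of RingAttention's inner loop.

    Each list entry is one device's [block_len, d] shard; a real
    implementation overlaps the KV rotation with compute via
    device-to-device sends instead of this Python loop.
    """
    n, d = len(q_blocks), q_blocks[0].shape[-1]
    outputs = []
    for i in range(n):  # each "device" holds one query block
        q = q_blocks[i]
        m = torch.full((q.shape[0], 1), float("-inf"))  # running row max
        l = torch.zeros(q.shape[0], 1)                  # running denominator
        acc = torch.zeros_like(q)                       # running numerator
        for step in range(n):
            j = (i + step) % n  # KV shard arriving on the ring this step
            scores = q @ k_blocks[j].T / d ** 0.5
            m_new = torch.maximum(m, scores.amax(-1, keepdim=True))
            scale = torch.exp(m - m_new)  # rescale earlier partial sums
            p = torch.exp(scores - m_new)
            l = l * scale + p.sum(-1, keepdim=True)
            acc = acc * scale + p @ v_blocks[j]
            m = m_new
        outputs.append(acc / l)
    return torch.cat(outputs)

# Sanity check against dense attention on a toy sequence.
torch.manual_seed(0)
q, k, v = (torch.randn(8, 16) for _ in range(3))
blocks = lambda t: list(t.chunk(4))
ring = ring_attention_sim(blocks(q), blocks(k), blocks(v))
dense = torch.softmax(q @ k.T / 16 ** 0.5, dim=-1) @ v
assert torch.allclose(ring, dense, atol=1e-5)
```

Because only one KV block is resident per device at a time, activation memory scales with the block size rather than the full sequence, which is what makes >1M-token contexts feasible.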
Gemini Ultra is out, to mixed reviews
gemini-ultra gemini-advanced solar-10.7b openhermes-2.5-mistral-7b subformer billm google openai mistral-ai hugging-face multi-gpu-support training-data-contamination model-merging model-alignment listwise-preference-optimization high-performance-computing parameter-sharing post-training-quantization dataset-viewer gpu-scheduling fine-tuning vram-optimization
Google released Gemini Ultra as the paid "Gemini Advanced with Ultra 1.0" tier, retiring the Bard brand in favor of Gemini. Early reviews called it "slightly faster/better than ChatGPT" but noted reasoning gaps. The Steam Deck was highlighted as a surprisingly capable AI workstation, able to run models like Solar 10.7B. Community discussions covered multi-GPU support for the open-source Unsloth, training-data contamination from OpenAI outputs, ethical concerns over model merging, and new alignment techniques like Listwise Preference Optimization (LiPO). The Mojo programming language drew praise for high-performance computing. On the research side, the Subformer model uses sandwich-style parameter sharing and SAFE for efficiency, while BiLLM introduces 1-bit post-training quantization to cut resource use. An OpenHermes dataset viewer tool was launched, GPU scheduling with Slurm was discussed, and fine-tuning challenges for models like OpenHermes-2.5-Mistral-7B and their VRAM requirements remained topics of interest.
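As a rough illustration of what 1-bit post-training quantization means, the sketch below applies the classic per-row sign-and-scale binarization that schemes like BiLLM build on; BiLLM itself adds salient-weight handling and residual approximation on top, which this deliberately omits. The helper name is hypothetical.

```python
import torch

def binarize_rowwise(w: torch.Tensor) -> torch.Tensor:
    """Replace each row of w with alpha * sign(w).

    alpha = mean(|row|) is the closed-form minimizer of
    ||row - alpha * sign(row)||^2, so each weight costs 1 bit
    plus a single fp16 scale per row.
    """
    alpha = w.abs().mean(dim=1, keepdim=True)
    return alpha * torch.sign(w)

w = torch.randn(4096, 4096)
w_bin = binarize_rowwise(w)
print(f"relative error: {(w - w_bin).norm() / w.norm():.3f}")
```

The appeal for VRAM optimization is straightforward arithmetic: a 7B-parameter weight matrix stored at 1 bit per weight plus per-row scales needs roughly 1/16 of the memory of its fp16 original.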
Less Lazy AI
hamster-v0.2 flan-t5 miqu-1-120b-gguf qwen2 axolotl openai hugging-face nous-research h2oai apple model-merging fine-tuning quantization vram-optimization plugin-development chatbot-memory model-training bug-reporting api-compatibility philschmid
The AI Discord summaries for early 2024 cover various community discussions and developments. Highlights include 20 guilds, 308 channels, and 10449 messages analyzed, saving an estimated 780 minutes of reading time. Key topics include the Polymind Plugin Puzzle integrating the PubMed API, roleplay with HamSter v0.2, VRAM challenges in Axolotl training, fine-tuning tips for FLAN-T5, and innovative model merging strategies. The Nous Research AI community discussed GPT-4's lyricism issues, quantization techniques using llama.cpp, frankenmerging with models like miqu-1-120b-GGUF, anticipation for Qwen2, and tools like text-generation-webui and ExLlamaV2. The LM Studio community reported a bug where the app continues running after the UI is closed, with a workaround of forcibly terminating the process. These discussions reflect ongoing challenges and innovations in AI model training, deployment, and interaction.
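"Frankenmerging", the passthrough-merge technique behind self-stacked models like miqu-1-120b-GGUF, is conceptually just re-indexing transformer blocks from donor checkpoints into a deeper stack. Below is a minimal sketch assuming Llama-style Hugging Face key names (`model.layers.N.*`); the donor names and slice ranges are illustrative, not the actual recipe used for miqu-1-120b.

```python
def frankenmerge(state_dicts, slices):
    """Build a deeper model by stacking (possibly overlapping) layer
    ranges copied from donor checkpoints.

    slices: list of (donor_name, start_layer, end_layer) half-open ranges.
    """
    merged, out_layer = {}, 0
    for donor, start, end in slices:
        sd = state_dicts[donor]
        for layer in range(start, end):
            src = f"model.layers.{layer}."
            dst = f"model.layers.{out_layer}."
            for key, tensor in sd.items():
                if key.startswith(src):
                    merged[dst + key[len(src):]] = tensor
            out_layer += 1
    return merged

# Toy example: stack overlapping slices of two 4-layer "checkpoints"
# into a 6-layer model.
dummy = {f"model.layers.{i}.w": i for i in range(4)}
out = frankenmerge({"a": dummy, "b": dummy},
                   [("a", 0, 3), ("b", 1, 4)])
print(sorted(out))  # model.layers.0.w ... model.layers.5.w
```

Real merges also carry over embeddings, norms, and the LM head, and tooling such as mergekit automates this bookkeeping; the sketch shows only the block re-indexing that gives these merges their depth.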