All tags
Topic: "cpu-offloading"
not much happened today
vllm deepseek-v3 llamaindex openai deepseek qdrant twilio elevenlabs training-efficiency parallelism cpu-offloading gradient-descent mixture-of-experts fp8-precision memory-optimization ai-voice-assistants coding-assistants document-processing version-control learning-rate-schedules federated-learning agentic-systems multi-agent-systems deliberative-alignment chain-of-thought on-device-ai multimodality francois-fleuret daniel-hanchen aaron-defazio fchollet elad-gil wojciech-zaremba richard-socher
ChatGPT, Sora, and the OpenAI API experienced a >5 hour outage but have since been restored. Updates to vLLM enable DeepSeek-V3 to run with enhanced parallelism and CPU offloading, improving model deployment flexibility. Discussions of gradient descent in top-k-routed MoE models and the adoption of FP8 precision center on training efficiency and memory optimization. AIDE, an AI voice medical assistant by Team Therasync, leverages Qdrant, OpenAI, and Twilio. DeepSeek-Engineer offers AI-powered coding assistance with structured outputs. LlamaIndex integrates LlamaCloud and ElevenLabs for large-scale document processing and voice interaction. Insights on version control with ghstack and advocacy for linear decay learning rate schedules highlight best practices in AI development. Experts predict smaller, tighter models, true multimodal models, and on-device AI in 2025. Proposals for planetary-scale federated learning and community AGI moonshots emphasize future AI directions. Discussions on agentic systems, multi-agent workflows, and deliberative alignment through chain-of-thought reasoning underscore AI safety and alignment efforts.
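On the linear decay learning rate schedules mentioned above: the idea is to ramp the learning rate linearly to its peak, then decay it linearly to zero by the end of training. A minimal, generic sketch (not any specific author's exact schedule; the function name and warmup parameter are illustrative assumptions):

```python
def linear_decay_lr(step, total_steps, base_lr, warmup_steps=0):
    """Optional linear warmup, then linear decay from base_lr to 0.

    This is a generic sketch of a linear decay schedule, not a
    reproduction of any particular paper's implementation.
    """
    if warmup_steps and step < warmup_steps:
        # Ramp up from base_lr/warmup_steps to base_lr.
        return base_lr * (step + 1) / warmup_steps
    # Decay linearly over the remaining steps, clamping at 0.
    decay_span = total_steps - warmup_steps
    remaining = total_steps - step
    return base_lr * max(remaining, 0) / decay_span

print(linear_decay_lr(50, 100, 1.0))  # halfway through: 0.5
```

In practice this would be wrapped in a framework's scheduler API rather than called by hand; the arithmetic is the whole schedule.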
FSDP+QLoRA: the Answer to 70b-scale AI for desktop class GPUs
qlora fsdp inflection-2.5 gpt-4 answer.ai hugging-face meta-ai-fair nvidia inflectionai model-training quantization memory-optimization gradient-checkpointing cpu-offloading fine-tuning model-sharding reinforcement-learning chain-of-thought benchmarking jeremy_howard tim_dettmers yann_lecun
Jeremy Howard and collaborators released a new tool combining FSDP, QLoRA, and HQQ to enable training 70b-parameter models on affordable consumer GPUs like RTX 4090s with only 24GB of VRAM each, overcoming memory constraints that previously required data-center GPUs costing over $150k. The approach shards quantized models across multiple GPUs and uses techniques like gradient checkpointing and CPU offloading to train efficiently on desktop-class hardware. The blog post details the challenges and solutions in integrating these methods, highlighting a cost reduction from over $150k to under $2.5k for training large language models. Additionally, Twitter recaps mention Inflection AI's Inflection-2.5 model rivaling GPT-4 in benchmarks with less compute, and Grok improving speed by 3x. Yann LeCun discusses multi-step reasoning training for LLMs.
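The back-of-envelope memory arithmetic behind the claim can be sketched directly. Assuming two 24GB RTX 4090s (a configuration consistent with the post), 70b weights at fp16 are far too large, but 4-bit quantization plus FSDP sharding brings the per-GPU weight footprint under 24GB (activations, optimizer state, and LoRA adapters add overhead, which is where gradient checkpointing and CPU offloading come in):

```python
def model_memory_gb(n_params, bits_per_param):
    """Approximate memory needed to hold model weights, in GB (1e9 bytes)."""
    return n_params * bits_per_param / 8 / 1e9

params = 70e9
fp16_gb = model_memory_gb(params, 16)  # full-precision-ish weights
int4_gb = model_memory_gb(params, 4)   # 4-bit quantized (QLoRA-style)
per_gpu = int4_gb / 2                  # FSDP shards weights across 2 GPUs

print(fp16_gb)  # 140.0 GB: no single consumer GPU comes close
print(int4_gb)  # 35.0 GB: still too big for one 24GB card
print(per_gpu)  # 17.5 GB per GPU: fits in 24GB, with headroom for training state
```

This is why quantization alone is not enough and sharding alone is not enough; the combination is what makes a 70b model fit on desktop-class hardware.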