not much happened today
vllm-0.20.0 poolside-laguna-xs.2 ling-2.6-flash nemotron-3-nano-omni qwen-3.5 vllm poolside nvidia openrouter lmstudio ollama unsloth fal fireworks deepinfra togethercompute baseten canonical memory-optimization mixture-of-experts model-optimization inference-speed quantization model-deployment multimodality hardware-optimization model-benchmarking open-models agentic-ai jeremyphoward maharshii teortaxestex aymericroucher piotrz
vLLM v0.20.0 introduces significant improvements in memory and MoE serving efficiency, including a TurboQuant 2-bit KV cache that delivers 4× KV-cache capacity alongside a 2.1% latency improvement. The release adds support for DeepSeek V4 MegaMoE on Blackwell and broadens hardware coverage to Jetson Thor, ROCm, Intel XPU, and Grace-Blackwell setups. Early benchmarks show DeepSeek V4 Pro on B300 hardware running up to 8× faster than on H200. The ecosystem is rapidly adding day-0 support for new open models such as Poolside Laguna XS.2, Ling-2.6-flash, and NVIDIA Nemotron 3 Nano Omni.
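To ground the KV-cache discussion, here is a minimal, assumption-laden sketch of serving a model through vLLM with a quantized KV cache. It uses the long-standing `kv_cache_dtype="fp8"` option and a placeholder model ID; the recap does not name the exact flag for the 2-bit TurboQuant mode, so none is shown here.

```python
# Minimal sketch (not from the release notes): serving with a quantized KV cache in vLLM.
# "fp8" is the existing vLLM KV-cache dtype option; the model ID below is a placeholder.
from vllm import LLM, SamplingParams

llm = LLM(
    model="org/some-moe-model",     # placeholder HF repo id, not a real checkpoint
    kv_cache_dtype="fp8",           # quantized KV cache -> more concurrent sequences per GPU
    gpu_memory_utilization=0.90,
)

params = SamplingParams(temperature=0.2, max_tokens=256)
outputs = llm.generate(["Write a haiku about KV caches."], params)
print(outputs[0].outputs[0].text)
```

The same pattern applies regardless of the KV-cache bit width: shrinking cache entries raises the number of sequences that fit in GPU memory, which is where most of the claimed capacity gain comes from.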
Poolside released Laguna XS.2, a 33B-total / 3B-active MoE coding model under Apache 2.0 that can run on a single GPU, featuring hybrid attention and an FP8 KV cache, with performance close to Qwen-3.5.
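As a rough sanity check on the single-GPU claim, here is a back-of-envelope estimate; the layer count, head geometry, and sequence length are illustrative assumptions, not Laguna XS.2's actual configuration.

```python
# Back-of-envelope estimate (assumed numbers throughout): why a 33B-total / 3B-active
# MoE can fit on one 80 GB GPU when weights and KV cache are kept in 8-bit formats.
total_params = 33e9          # total MoE parameters
bytes_per_weight = 1         # FP8 weights ~ 1 byte each
weight_mem_gb = total_params * bytes_per_weight / 1e9       # ~33 GB

# Rough KV-cache footprint for one long sequence (shapes are assumptions):
layers, kv_heads, head_dim, seq_len = 48, 8, 128, 131072
kv_bytes = 2 * layers * kv_heads * head_dim * seq_len * 1   # K and V, FP8
kv_mem_gb = kv_bytes / 1e9                                   # ~12.9 GB

print(f"weights ≈ {weight_mem_gb:.0f} GB, KV cache ≈ {kv_mem_gb:.1f} GB")
# ≈ 46 GB total, comfortably under an 80 GB accelerator, while only ~3B parameters
# are active per token, keeping per-token compute small.
```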
NVIDIA launched Nemotron 3 Nano Omni, a 30B / A3B multimodal MoE with 256K context, supporting text, image, video, audio, and documents, with immediate distribution across multiple platforms. Discussions highlighted tradeoffs in quantization methods and a shift away from CUDA lock-in towards heterogeneous accelerator support.
not much happened today
gpt-4.5 claude-3.7-sonnet deepseek-r1 smolagents-codeagent gpt-4o llama-3-8b tinyr1-32b-preview r1-searcher forgetting-transformer nanomoe openai deepseek hugging-face mixture-of-experts reinforcement-learning kv-cache-compression agentic-ai model-distillation attention-mechanisms model-compression minimax model-pretraining andrej-karpathy cwolferesearch aymericroucher teortaxestex jonathanross321 akhaliq
The AI news recap highlights several key developments: nanoMoE, a PyTorch implementation of a mid-sized Mixture-of-Experts (MoE) model inspired by Andrej Karpathy's nanoGPT, enables pretraining on commodity hardware within a week. An agentic leaderboard ranks the LLMs powering smolagents CodeAgent, with GPT-4.5 leading, followed by Claude-3.7-Sonnet. Discussions around DeepSeek-R1 emphasize AI model commoditization, with DeepSeek dubbed the "OpenAI of China." Q-Filters offer a training-free method for KV-cache compression in autoregressive models, achieving 32× compression with minimal perplexity loss. The PokéChamp minimax language agent, powered by GPT-4o and Llama-3-8B, demonstrates strong performance in Pokémon battles. Other notable models include TinyR1-32B-Preview with Branch-Merge Distillation, R1-Searcher, which incentivizes search capability via reinforcement learning, and the Forgetting Transformer, which adds a forget gate to softmax attention. These advances reflect ongoing innovation in model architectures, compression, reinforcement learning, and agentic AI.
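For readers unfamiliar with the routing pattern nanoMoE implements, here is a minimal top-k MoE feed-forward layer in PyTorch; it is a generic sketch with assumed dimensions and routing details, not the nanoMoE code itself.

```python
# Minimal sketch of a top-k routed MoE feed-forward layer, in the spirit of small
# educational repos like nanoMoE. Dimensions, expert count, and k are assumptions.
import torch
import torch.nn as nn

class MoEFeedForward(nn.Module):
    def __init__(self, d_model=512, d_ff=2048, n_experts=8, k=2):
        super().__init__()
        self.k = k
        self.router = nn.Linear(d_model, n_experts, bias=False)
        self.experts = nn.ModuleList([
            nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(), nn.Linear(d_ff, d_model))
            for _ in range(n_experts)
        ])

    def forward(self, x):                          # x: (batch, seq, d_model)
        tokens = x.reshape(-1, x.size(-1))         # flatten to (n_tokens, d_model)
        logits = self.router(tokens)               # (n_tokens, n_experts)
        weights, idx = logits.softmax(-1).topk(self.k, dim=-1)  # top-k experts per token
        weights = weights / weights.sum(-1, keepdim=True)       # renormalize gate weights
        out = torch.zeros_like(tokens)
        for e, expert in enumerate(self.experts):
            for slot in range(self.k):
                mask = idx[:, slot] == e           # tokens routed to expert e in this slot
                if mask.any():
                    out[mask] += weights[mask, slot, None] * expert(tokens[mask])
        return out.view_as(x)

# Usage: drop in place of the dense MLP inside a Transformer block.
layer = MoEFeedForward()
y = layer(torch.randn(2, 16, 512))                 # -> shape (2, 16, 512)
```

Only k experts run per token, which is the property that lets total parameter count grow well beyond the per-token compute budget.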
not much happened today
deepseek-r1 alphageometry-2 claude deepseek openai google-deepmind anthropic langchain adyen open-source reasoning agentic-ai javascript model-release memes ai-development benchmarking akhaliq lmthang aymericroucher vikhyatk swyx
DeepSeek-R1 surpasses OpenAI in GitHub stars, marking a milestone in open-source AI with rapid growth in community interest. AlphaGeometry2 reaches gold-medalist level performance, solving 84% of IMO geometry problems and showcasing significant advances in AI reasoning. LangChain releases a tutorial for building AI agents in JavaScript, expanding developer options for agent deployment. Reflections on Anthropic's Claude recount early access and the model's influence on AI development timelines. Lighthearted AI humor ranges from calls to ban second-order optimizers to quips about longevity in web development. The AI Engineer Summit 2025 workshops were announced, continuing community engagement and education.