Person: "guillaumelample"
The Quiet Rise of Claude Code vs Codex
mistral-small-3.2 qwen3-0.6b llama-3-1b gemini-2.5-flash-lite gemini-app magenta-real-time apple-3b-on-device mistral-ai hugging-face google-deepmind apple artificial-analysis kuaishou instruction-following function-calling model-implementation memory-efficiency 2-bit-quantization music-generation video-models benchmarking api reach_vb guillaumelample qtnx_ shxf0072 rasbt demishassabis artificialanlys osanseviero
Claude Code is gaining mass adoption, inspiring derivative projects like OpenCode and ccusage, with discussions ongoing across AI communities. Mistral AI released Mistral Small 3.2, an update to its 24B-parameter model that improves instruction following and function calling, available on Hugging Face and supported by vLLM. Sebastian Raschka implemented Qwen3 0.6B from scratch, noting its deeper architecture and better memory efficiency compared to Llama 3 1B. Google DeepMind showcased Gemini 2.5 Flash-Lite's UI code generation from visual context and added video upload support in the Gemini App. Apple's new 3B-parameter on-device foundation model was benchmarked, showing slower speed but efficient memory use via 2-bit quantization, making it suitable for background tasks. Google DeepMind also released Magenta RealTime, an 800M-parameter music generation model licensed under Apache 2.0, marking Google's 1000th model on Hugging Face. Kuaishou launched KLING 2.1, a new video model accessible via API.
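The memory win from 2-bit quantization mentioned above is easy to sanity-check with back-of-envelope arithmetic. The sketch below is illustrative only: it counts weight storage for a ~3B-parameter model at various bit widths and ignores activations, KV cache, and quantization metadata overhead.

```python
# Rough weight-memory estimate for a 3B-parameter model at different
# quantization levels (weights only; real footprints are somewhat larger).

def weight_memory_gb(n_params: float, bits_per_weight: float) -> float:
    """Bytes needed to store the weights, converted to GiB."""
    return n_params * bits_per_weight / 8 / 1024**3

n = 3e9  # ~3B parameters, as in Apple's on-device model
for bits in (16, 8, 4, 2):
    print(f"{bits:2d}-bit: {weight_memory_gb(n, bits):.2f} GiB")
```

At 2 bits per weight the model fits in roughly 0.7 GiB versus about 5.6 GiB at fp16, which is why aggressive quantization makes a 3B model plausible as an always-resident background service on a phone.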
not much happened today
deepseek-r1-0528 pali-gemma-2 gemma-3 shieldgemma-2 txgemma gemma-3-qat gemma-3n-preview medgemma dolphingemma signgemma claude-4 opus-4 claude-sonnet-4 codestral-embed bagel qwen nemotron-cortexa gemini-2.5-pro deepseek-ai huggingface gemma claude bytedance qwen nemotron sakana-ai-labs benchmarking model-releases multimodality code-generation model-performance long-context reinforcement-learning model-optimization open-source yuchenj_uw _akhaliq clementdelangue osanseviero alexalbert__ guillaumelample theturingpost lmarena_ai epochairesearch scaling01 nrehiew_ ctnzr
DeepSeek released the R1-0528 update to its R1 model, available on Hugging Face and through inference partners. The Gemma model family continues its prolific development, including PaliGemma 2, Gemma 3, ShieldGemma 2, and others. Claude 4 and its variants, Opus 4 and Claude Sonnet 4, show top benchmark performance, including a new SOTA on ARC-AGI-2 and WebDev Arena. Codestral Embed introduces a 3072-dimensional code embedder. BAGEL, an open-source multimodal model by ByteDance, supports reading, reasoning, drawing, and editing with long mixed contexts. Benchmarking highlights include Nemotron-CORTEXA topping SWE-bench and Gemini 2.5 Pro's performance on VideoGameBench. Discussions of whether random rewards can improve RL training center on Qwen models. "Opus 4 NEW SOTA ON ARC-AGI-2. It's happening - I was right" and "Claude 4 launch has dev moving at a different pace" reflect excitement in the community.
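A code embedder like Codestral Embed maps each snippet to a fixed-length vector (3072 dimensions here), and retrieval then ranks candidates by cosine similarity between vectors. A minimal sketch of that ranking step, using tiny 4-dimensional toy vectors as stand-ins for real 3072-dimensional embeddings (the vectors and snippet names are made up for illustration):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy stand-ins for embedding-model output (illustrative only).
query     = [0.1, 0.3, 0.5, 0.7]
snippet_a = [0.1, 0.3, 0.5, 0.7]  # same direction -> similarity 1.0
snippet_b = [0.7, 0.5, 0.3, 0.1]  # different direction -> lower score

print(cosine_similarity(query, snippet_a))  # -> 1.0 (within float error)
print(cosine_similarity(query, snippet_b))
```

In practice the embedding vectors come from the model's API and the comparison runs over thousands of candidates, but the scoring math is exactly this.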
o1 destroys Lmsys Arena, Qwen 2.5, Kyutai Moshi release
o1-preview o1-mini qwen-2.5 qwen-plus llama-3-1 deepseek-v2.5 openai anthropic google alibaba deepseek kyutai weights-biases mistral-ai chain-of-thought multimodality model-benchmarking model-performance streaming-neural-architecture llm-observability experiment-tracking rate-limiting sama guillaumelample
OpenAI's o1-preview model has achieved a milestone by fully matching top daily AI news stories without human intervention, consistently outperforming models from Anthropic, Google, and Meta (Llama 3) in vibe-check evaluations. OpenAI models dominate the top 4 slots on the LMSYS leaderboard, with rate limits increasing to 500-1000 requests per minute. In open source, Alibaba's Qwen 2.5 suite surpasses Llama 3.1 at the 70B scale, and its updated closed Qwen-Plus models outperform DeepSeek V2.5 but still lag behind the leading American models. Kyutai released open weights for Moshi, its realtime voice model featuring a unique streaming neural architecture with an "inner monologue." Weights & Biases introduced Weave, an LLM observability toolkit that enhances experiment tracking and evaluation, turning prompting into a more scientific process. The news also highlights upcoming events like the WandB LLM-as-judge hackathon in San Francisco. "o1-preview consistently beats out our vibe check evals" and "OpenAI models are gradually raising rate limits by the day."
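Even at 500-1000 requests per minute, clients still need to stay under the quota on their side. A common approach is a sliding-window throttle like the sketch below; this is a generic client-side pattern, not a feature of any OpenAI SDK, and the class name is our own.

```python
import time
from collections import deque

class RateLimiter:
    """Sliding-window throttle: at most `max_calls` per `period` seconds.

    Call `acquire()` before each API request; it sleeps just long enough
    for the oldest request in the window to age out when the quota
    (e.g. 500 requests/minute) would otherwise be exceeded.
    """

    def __init__(self, max_calls: int, period: float = 60.0):
        self.max_calls = max_calls
        self.period = period
        self.calls = deque()  # monotonic timestamps of recent calls

    def acquire(self) -> None:
        now = time.monotonic()
        # Drop timestamps that have aged out of the window.
        while self.calls and now - self.calls[0] >= self.period:
            self.calls.popleft()
        if len(self.calls) >= self.max_calls:
            # Sleep until the oldest call leaves the window.
            time.sleep(self.period - (now - self.calls[0]))
        self.calls.append(time.monotonic())

limiter = RateLimiter(max_calls=500, period=60.0)
# Before each API request: limiter.acquire()
```

A token-bucket variant smooths bursts better, but the sliding window matches per-minute quotas most directly.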
not much happened today
gpt-4-0613 gpt-3.5-turbo-0613 gpt-4o-2024-08-06 mistral-large-2 gpt4-turbo claude-3-opus idefics3-llama bigllama-3.1-1t-instruct llama-3-120b-instruct openai mistral-ai meta-ai-fair structured-outputs function-calling json-schema benchmarking multimodality context-windows model-scaling ai-hardware vision speech-processing robotics ai-regulation sama rohanpaul_ai corbtt guillaumelample mervenoyann maximelabonne aidan_mclau adcock_brett ylecun
OpenAI introduced structured outputs in their API with a new "strict" mode and a "response_format" parameter, supported by models including gpt-4-0613, gpt-3.5-turbo-0613, and the new gpt-4o-2024-08-06. They also halved the price of gpt-4o to $2.50 per million input tokens. Mistral Large 2 outperforms gpt4-turbo and claude-3-opus on hard benchmarks and coding tasks. Idefics3-Llama offers multimodal capabilities with a 10k-token context window. BigLlama-3.1-1T-Instruct is an upscaled version of llama-3-120b-instruct. A new benchmark, "big_model_smell", measures creativity and reliability. The Figure 02 robot features advanced AI hardware with an onboard vision-language model, an enhanced battery, and speech-to-speech reasoning. Yann LeCun expressed concerns about California's SB1047 regulation.
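The structured-outputs feature works by attaching a JSON Schema to the request: "response_format" carries the schema and "strict": true tells the API the model's output must validate against it. A sketch of the request body, assuming the field names as publicly documented; the "news_item" schema itself is a made-up example.

```python
import json

# Hypothetical schema for illustration: extract a headline and a model name.
response_format = {
    "type": "json_schema",
    "json_schema": {
        "name": "news_item",
        "strict": True,  # output must validate against the schema
        "schema": {
            "type": "object",
            "properties": {
                "headline": {"type": "string"},
                "model_mentioned": {"type": "string"},
            },
            "required": ["headline", "model_mentioned"],
            "additionalProperties": False,
        },
    },
}

body = {
    "model": "gpt-4o-2024-08-06",
    "messages": [{"role": "user", "content": "Summarize today's AI news."}],
    "response_format": response_format,
}
print(json.dumps(body, indent=2))
```

With strict mode on, a response that omits a required field or adds an extra one is rejected by the API rather than passed through, which removes a whole class of JSON-parsing retry logic from client code.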