
AINews

by smol.ai

How over 50k top AI Engineers keep up, every weekday.


We summarize top AI discords + AI reddits + AI X/Twitters, and send you a roundup each day!

"Highest-leverage 45 mins I spend everyday" - Soumith

" best AI newsletter atm " and " I'm not sure that enough people subscribe " - Andrej

"genuinely incredible" - Chris

"surprisingly decent" - Hamel

You can pay for a customizable version here. Thanks to Pieter Levels for the Lex Fridman feature!

Last 30 days in AI

See all issues
  • Jun 10
    Apple exposes Foundation Models API and... no new Siri
    chatgpt apple openai langchain llamaindex on-device-ai foundation-models reasoning reinforcement-learning voice translation software-automation agentic-workflows gdb scaling01 giffmana kevinweil
    Apple released on-device foundation models for iOS developers, though its recent "The Illusion of Thinking" paper faced significant backlash for flawed methodology regarding LLM reasoning. OpenAI updated ChatGPT's Advanced Voice Mode with a more natural voice and improved translation, demonstrated by Greg Brockman. LangChain and LlamaIndex launched new AI agents and tools, including a SWE Agent for software automation and an Excel agent using reinforcement learning for data transformation. The AI community engaged in heated debate over the reasoning capabilities of LLMs, highlighting challenges in evaluation methods.
  • Jun 06
    not much happened today
    dots-llm1 qwen3-235b xiaohongshu rednote-hilab deepseek huggingface mixture-of-experts open-source model-benchmarking fine-tuning inference context-windows training-data model-architecture model-performance model-optimization
    China's Xiaohongshu (Rednote) released dots.llm1, a 142B-parameter open-source Mixture-of-Experts (MoE) language model with 14B active parameters and a 32K context window, pretrained on 11.2 trillion high-quality, non-synthetic tokens. The model can be served via Docker, Hugging Face, and vLLM, and provides intermediate checkpoints every 1 trillion tokens, enabling flexible fine-tuning. Benchmarking claims it slightly surpasses Qwen3 235B on MMLU, though some concerns exist about benchmark selection and synthetic-data verification. The release is notable for its truly open-source license and avoidance of synthetic training data, sparking community optimism about support in frameworks such as llama.cpp and mlx.
  • Jun 05
    Gemini 2.5 Pro (06-05) launched at AI Engineer World's Fair
    gemini-2.5-pro qwen3-embedding-8b openthinker3-7b google qwen lighton morph-labs openai nvidia benchmarking reasoning coding math embedding-models late-interaction dataset-release model-performance model-architecture ai-conferences greg_brockman jensen_huang christian_szegedy swyx
    On day two of AIE, Google's Gemini 2.5 Pro reclaimed the top spot on the LMArena leaderboard with a score of 1470 and a +24 Elo increase, showing improvements in coding, reasoning, and math. Qwen3 released state-of-the-art embedding and reranking models, with Qwen3-Embedding-8B topping the MTEB multilingual leaderboard. OpenThinker3-7B emerged as the top open reasoning model, trained on the OpenThoughts3-1.2M dataset and outperforming previous models by 33%. LightOn introduced FastPlaid, achieving up to a 554% speedup for late-interaction models. Morph Labs hired Christian Szegedy as Chief Scientist to lead Verified Superintelligence development. The AI Engineer World's Fair featured a fireside chat with Greg Brockman and NVIDIA CEO Jensen Huang, highlighting the return of basic research and engineering best practices.
  • Jun 04
    AI Engineer World's Fair Talks Day 1
    gemini-2.5 gemma claude-code mistral cursor anthropic openai aie google-deepmind meta-ai-fair agent-based-architecture open-source model-memorization scaling-laws quantization mixture-of-experts language-model-memorization model-generalization langgraph model-architecture
    Mistral launched Mistral Code, and Cursor released version 1.0. Anthropic improved Claude Code plans, while OpenAI expanded ChatGPT's connections. The day was dominated by AIE keynotes and tracks including GraphRAG, RecSys, and Tiny Teams. On Reddit, Google open-sourced the DeepSearch stack for building AI agents with Gemini 2.5 and LangGraph, enabling flexible agent architectures and integration with local LLMs like Gemma. A new Meta paper analyzed language model memorization, showing GPT-style transformers store about 3.5–4 bits/parameter, and explored the transition from memorization to generalization, with implications for Mixture-of-Experts models and quantization effects.
  • Jun 03
    not much happened today
    codex claude-4-opus claude-4-sonnet gemini-2.5-pro gemini-2.5 qwen-2.5-vl qwen-3 playdiffusion openai anthropic google perplexity-ai bing playai suno hugging-face langchain-ai qwen mlx assemblyai llamacloud fine-tuning model-benchmarking text-to-video agentic-ai retrieval-augmented-generation open-source-models speech-editing audio-processing text-to-speech ultra-low-latency multimodality public-notebooks sama gdb kevinweil lmarena_ai epochairesearch reach_vb wightmanr deeplearningai mervenoyann awnihannun jordirib1 aravsrinivas omarsar0 lioronai jerryjliu0 nerdai tonywu_71 _akhaliq clementdelangue _mfelfel
    OpenAI rolled out Codex to ChatGPT Plus users with internet access and fine-grained controls, and improved memory features for free users. Anthropic's Claude 4 Opus and Sonnet models lead coding benchmarks, while Google's Gemini 2.5 Pro and Flash models gain recognition with new audio capabilities. Qwen 2.5-VL and Qwen 3 quantizations are noted for versatility and support. Bing Video Creator launched globally, enabling text-to-video generation, and Perplexity Labs sees increased demand for travel search. New agentic AI tools and RAG innovations include LlamaCloud and FedRAG. Open-source releases include Holo-1 for web navigation and PlayAI's PlayDiffusion for speech editing. Audio and multimodal advances feature Suno's music-editing upgrades, Google's native TTS in 24+ languages, and Universal Streaming's ultra-low-latency speech-to-text. Google NotebookLM now supports public notebooks. "Codex's internet access brings tradeoffs, with explicit warnings about risk," and "Gemini 2.5 Pro is cited as a daily driver by users."
  • Jun 02
    not much happened today
    deepseek-r1-0528 o3 gemini-2.5-pro claude-opus-4 deepseek_ai openai gemini meta-ai-fair anthropic x-ai ollama hugging-face alibaba bytedance xiaomi reasoning reinforcement-learning benchmarking quantization local-inference model-evaluation open-weights transparency post-training agentic-benchmarks long-context hallucination-detection teortaxestex wenfeng danielhanchen awnihannun reach_vb abacaj
    DeepSeek R1-0528 release brings major improvements in reasoning, hallucination reduction, JSON output, and function calling, matching or surpassing closed models like OpenAI o3 and Gemini 2.5 Pro on benchmarks such as Artificial Analysis Intelligence Index, LiveBench, and GPQA Diamond. The model ranks #2 globally in open weights intelligence, surpassing Meta AI, Anthropic, and xAI. Open weights and technical transparency have fueled rapid adoption across platforms like Ollama and Hugging Face. Chinese AI labs including DeepSeek, Alibaba, ByteDance, and Xiaomi now match or surpass US labs in model releases and intelligence, driven by open weights strategies. Reinforcement learning post-training is critical for intelligence gains, mirroring trends seen at OpenAI. Optimized quantization techniques (1-bit, 4-bit) and local inference enable efficient experimentation on consumer hardware. New benchmarks like LisanBench test knowledge, planning, memory, and long-context reasoning, with OpenAI o3 and Claude Opus 4 leading. Discussions highlight concerns about benchmark contamination and overemphasis on RL-tuned gains.
  • May 31
    Mary Meeker is so back: BOND Capital AI Trends report
    qwen-3-8b anthropic hugging-face deepseek attention-mechanisms inference arithmetic-intensity transformers model-optimization interpretability model-quantization training tri_dao fleetwood___ teortaxestex awnihannun lateinteraction neelnanda5 eliebakouch _akhaliq
    Mary Meeker returns with a comprehensive 340-slide report on the state of AI, highlighting accelerating tech cycles, compute growth, and comparisons of ChatGPT to early Google and other iconic tech products. The report also covers enterprise traction and valuation of major AI companies. On Twitter, @tri_dao discusses an "ideal" inference architecture featuring attention variants like GTA, GLA, and DeepSeek MLA with high arithmetic intensity (~256), improving efficiency and model quality. Other highlights include the release of 4-bit DWQ of DSR1 Qwen3 8B on Hugging Face, AnthropicAI's open-source interpretability tools for LLMs, and discussions on transformer training and abstractions by various researchers.
  • May 29
    DeepSeek-R1-0528 - Gemini 2.5 Pro-level model, SOTA Open Weights release
    deepseek-r1-0528 gemini-2.5-pro qwen-3-8b qwen-3-235b deepseek-ai anthropic meta-ai-fair nvidia alibaba google-deepmind reinforcement-learning benchmarking model-performance open-weights reasoning quantization post-training model-comparison artificialanlys scaling01 cline reach_vb zizhpan andrewyng teortaxestex teknim1 lateinteraction abacaj cognitivecompai awnihannun
    DeepSeek R1-0528 marks a significant upgrade, closing the gap with proprietary models like Gemini 2.5 Pro and surpassing models from Anthropic, Meta, NVIDIA, and Alibaba. This Chinese open-weights model leads several AI benchmarks, driven by reinforcement learning post-training rather than architecture changes, and demonstrates increased reasoning-token usage (23K tokens per question). The China-US AI race intensifies as Chinese labs accelerate innovation through transparency and an open research culture. Key benchmarks include AIME 2024, LiveCodeBench, and GPQA Diamond.
  • May 28
    not much happened today
    deepseek-r1-0528 pali-gemma-2 gemma-3 shieldgemma-2 txgemma gemma-3-qat gemma-3n-preview medgemma dolphingemma signgemma claude-4 opus-4 claude-sonnet-4 codestral-embed bagel qwen nemotron-cortexa gemini-2.5-pro deepseek-ai huggingface gemma claude bytedance qwen nemotron sakana-ai-labs benchmarking model-releases multimodality code-generation model-performance long-context reinforcement-learning model-optimization open-source yuchenj_uw _akhaliq clementdelangue osanseviero alexalbert__ guillaumelample theturingpost lmarena_ai epochairesearch scaling01 nrehiew_ ctnzr
    DeepSeek R1-0528 was released, with availability on Hugging Face and inference partners. The Gemma model family continues prolific development, including PaliGemma 2, Gemma 3, and others. Claude 4 and its variants Opus 4 and Claude Sonnet 4 show top benchmark performance, including a new SOTA on ARC-AGI-2 and WebDev Arena. Codestral Embed introduces a 3072-dimensional code embedder. BAGEL, an open-source multimodal model by ByteDance, supports reading, reasoning, drawing, and editing with long mixed contexts. Benchmarking highlights include Nemotron-CORTEXA topping SWE-bench and Gemini 2.5 Pro's performance on VideoGameBench. Discussions of the effectiveness of random rewards center on Qwen models. "Opus 4 NEW SOTA ON ARC-AGI-2. It's happening - I was right" and "Claude 4 launch has dev moving at a different pace" reflect excitement in the community.
  • May 27
    Mistral's Agents API and the 2025 LLM OS
    qwen claude-4 chatgpt o3 o4 mistral-ai langchain-ai openai meta-ai-fair agent-frameworks multi-agent-systems tool-use code-execution web-search model-context-protocol persistent-memory function-calling open-source no-code reinforcement-learning model-performance agent-orchestration omarsar0 simonw swyx scaling01
    The LLM OS concept has evolved since 2023, with Mistral AI releasing a new Agents API that includes code execution, web search, persistent memory, and agent orchestration. LangChainAI introduced the Open Agent Platform (OAP), an open-source no-code platform for intelligent agents. OpenAI plans to develop ChatGPT into a super-assistant by H1 2025, competing with Meta. Discussions around Qwen models focus on reinforcement learning effects, while Claude 4 performance is also noted. The AI Engineer World's Fair is calling for volunteers.
  • May 26
    not much happened today
    chatgpt o3 o4 bagel-7b medgemma acereason-nemotron-14b codex gemini openai bytedance google nvidia sakana-ai-labs deep-learning-ai gemini agenticseek anthropic agentic-systems multimodality reasoning code-generation prompt-engineering privacy ethical-ai emergence synthetic-data speech-instruction-tuning low-resource-languages humor scaling01 mervenoyann sakananailabs _philschmid omarsar0 teortaxestex andrewlampinen sedielem cis_female
    OpenAI plans to evolve ChatGPT into a super-assistant by 2025 with models like o3 and o4 enabling agentic tasks and supporting a billion users. Recent multimodal and reasoning model releases include ByteDance's BAGEL-7B, Google's MedGemma, and NVIDIA's ACEReason-Nemotron-14B. The Sudoku-Bench Leaderboard highlights ongoing challenges in AI creative reasoning. In software development, OpenAI's Codex aids code generation and debugging, while Gemini's Context URL tool enhances prompt context. AgenticSeek offers a local, privacy-focused alternative for autonomous agents. Ethical concerns are raised about AGI development priorities and Anthropic's alignment with human values. Technical discussions emphasize emergence in AI and training challenges, with humor addressing misconceptions about Gemini 3.0 and async programming in C. A novel synthetic speech training method enables instruction tuning of LLMs without real speech data, advancing low-resource language support.
  • May 23
    not much happened today
    claude-4 claude-4-opus claude-4-sonnet gemini-2.5-pro gemma-3n imagen-4-ultra anthropic google-deepmind openai codebase-understanding coding agentic-performance multimodality text-to-speech video-generation model-integration benchmarking memory-optimization cline amanrsanger ryanpgreenblatt johnschulman2 alexalbert__ nearcyan mickeyxfriedman jeremyphoward gneubig teortaxesTex scaling01 artificialanlys philschmid
    Anthropic's Claude 4 models (Opus 4, Sonnet 4) demonstrate strong coding abilities, with Sonnet 4 achieving 72.7% on SWE-bench and Opus 4 at 72.5%. Claude Sonnet 4 excels in codebase understanding and is considered SOTA on large codebases. Criticism arose over Anthropic's handling of ASL-3 security requirements. Demand for Claude 4 is high, with integration into IDEs and support from Cherry Studio and FastHTML. Google DeepMind introduced Gemini 2.5 Pro Deep Think and Gemma 3n, a mobile multimodal model reducing RAM usage by nearly 3x. Google's Imagen 4 Ultra ranks third in the Artificial Analysis Image Arena, available on Vertex AI Studio. Google also promoted Google Beam, an AI video model for immersive 3D experiences, and new text-to-speech models with multi-speaker support. The GAIA benchmark shows Claude 4 Opus and Sonnet leading in agentic performance.
  • May 22
    Anthropic releases Claude 4 Sonnet and Opus: Memory, Agent Capabilities, Claude Code, Redteam Drama
    claude-4 claude-4-opus claude-4-sonnet claude-3.5-sonnet anthropic instruction-following token-accounting pricing-models sliding-window-attention inference-techniques open-sourcing model-accessibility agent-capabilities-api extended-context model-deployment
    Anthropic has officially released Claude 4 with two variants: Claude Opus 4, a high-capability model for complex tasks priced at $15/$75 per million input/output tokens, and Claude Sonnet 4, optimized for efficient everyday use. The release emphasizes instruction following and extended work sessions of up to 7 hours. Community discussions highlight concerns about token pricing and token-accounting transparency, plus calls to open-source Claude 3.5 Sonnet weights to support local model development. The news also covers Claude Code GA, the new Agent Capabilities API, and various livestreams and reports detailing these updates. There is notable debate around sliding-window attention and advanced inference techniques for local deployment.
  • May 21
    OpenAI buys Jony Ive's io for $6.5b, LMArena lands $100m seed from a16z
    gemini-2.5-pro gemini-diffusion openai lmarena a16z mistral-ai google google-deepmind multimodality reasoning code-generation math model-fine-tuning ai-assistants voice memory-optimization sundar_pichai
    OpenAI confirmed its acquisition of Jony Ive's io to develop consumer hardware. LMArena secured a $100 million seed round from a16z. Mistral launched a new code model fine-tune. Google DeepMind announced multiple updates at Google I/O 2025, including over a dozen new models and 20 AI products. Key highlights include the release of Gemini 2.5 Pro and Gemini Diffusion, featuring advanced multimodal reasoning, coding, and math capabilities, and the integration of Gemini into Google Chrome as an AI browsing assistant. The Deep Think enhanced reasoning mode and Project Astra improvements were also introduced, focusing on voice output, memory, and computer control for a universal AI assistant.
  • May 20
    Google I/O: new Gemini native voice, Flash, DeepThink, AI Mode (DeepSearch+Mariner+Astra)
    gemini-2.5-pro gemini-2.5 google google-deepmind ai-assistants reasoning generative-ai developer-tools ai-integration model-optimization ai-application model-updates ai-deployment model-performance demishassabis philschmid jack_w_rae
    Google I/O 2025 showcased significant advancements with Gemini 2.5 Pro and the Deep Think reasoning mode from Google DeepMind, emphasizing AI-driven transformations and developer opportunities. The Gemini app aims to become a universal AI assistant on the path to AGI, with new features like AI Mode in Google Search expanding generative AI access. The event included multiple keynotes and updates on over a dozen models and 20+ AI products, highlighting Google's leadership in AI innovation. Influential voices like demishassabis and philschmid provided insights and recaps, while the launch of Jules as a competitor to Codex/Devin was noted.
  • May 19
    not much happened today
    kernelllm-8b gpt-4o deepseek-v3 mistral-medium-3 qwen3 blip3-o xgen-small anisora stable-audio-open-small alphaevolve meta-ai-fair mistral-ai qwen deepseek salesforce bilibili stability-ai google benchmarking model-performance multilinguality hardware-optimization multimodality image-generation video-generation text-to-audio model-parallelism chain-of-thought instruction-following reasoning mitigation-strategies reach_vb lmarena_ai theadimeline adcock_brett jxmnop dair_ai omarsar0
    Meta released KernelLLM 8B, outperforming GPT-4o and DeepSeek V3 on KernelBench-Triton Level 1. Mistral Medium 3 debuted strongly in multiple benchmarks. Qwen3 models introduced a unified framework with multilingual support. DeepSeek-V3 features hardware-aware co-design. The BLIP3-o family was released for multimodal tasks using diffusion transformers. Salesforce launched xGen-Small models excelling in long-context and math benchmarks. Bilibili released AniSORA for anime video generation. Stability AI open-sourced Stable Audio Open Small, optimized for Arm devices. Google's AlphaEvolve coding agent found a faster matrix multiplication algorithm, the first improvement over Strassen's 1969 result. Research shows chain-of-thought reasoning can harm instruction-following ability, with mitigation strategies like classifier-selective reasoning being most effective, though reasoning techniques show high variance and limited generalization. "Chain-of-thought (CoT) reasoning can harm a model's ability to follow instructions" and "Mitigation strategies such as few-shot in-context learning, self-reflection, self-selective reasoning, and classifier-selective reasoning can counteract reasoning-induced failures".
  • May 16
    ChatGPT Codex, OpenAI's first cloud SWE agent
    codex-1 openai-o3 codex-mini gemma-3 blip3-o qwen-2.5 marigold-iid deepseek-v3 lightlab gemini-2.0 lumina-next openai runway salesforce qwen deepseek google google-deepmind j1 software-engineering parallel-processing multimodality diffusion-models depth-estimation scaling-laws reinforcement-learning fine-tuning model-performance multi-turn-conversation reasoning audio-processing sama kevinweil omarsar0 iscienceluvr akhaliq osanseviero c_valenzuelab mervenoyann arankomatsuzaki jasonwei demishassabis philschmid swyx teortaxestex jaseweston
    OpenAI launched Codex, a cloud-based software engineering agent powered by codex-1 (an optimized version of OpenAI o3) available in research preview for Pro, Enterprise, and Team ChatGPT users, featuring parallel task execution like refactoring and bug fixing. The Codex CLI was enhanced with quick sign-in and a new low-latency model, codex-mini. Gemma 3 is highlighted as the best open model runnable on a single GPU. Runway released the Gen-4 References API for style transfer in generation. Salesforce introduced BLIP3-o, a unified multimodal model family using diffusion transformers for CLIP image features. The Qwen 2.5 models (1.5B and 3B versions) were integrated into the PocketPal app with various chat templates. Marigold IID, a new state-of-the-art open-source depth estimation model, was released. In research, DeepSeek shared insights on scaling and hardware for DeepSeek-V3. Google unveiled LightLab, a diffusion-based light source control in images. Google DeepMind's AlphaEvolve uses Gemini 2.0 to discover new math and reduce costs without reinforcement learning. Omni-R1 studied audio's role in fine-tuning audio LLMs. Qwen proposed a parallel scaling law inspired by classifier-free guidance. Salesforce released Lumina-Next on the Qwen base, outperforming Janus-Pro. A study found LLM performance degrades in multi-turn conversations due to unreliability. J1 is incentivizing LLM-as-a-Judge thinking via reinforcement learning. A new Qwen study correlates question and strategy similarity to predict reasoning strategies.
  • May 15
    Gemini's AlphaEvolve agent uses Gemini 2.0 to find new Math and cuts Gemini cost 1% — without RL
    gemini gpt-4.1 gpt-4o-mini o3 o4-mini google-deepmind openai algorithm-discovery coding-agents matrix-multiplication optimization reinforcement-learning model-weights training-efficiency safety-evaluations instruction-following coding-tasks model-releases _philschmid scott_swingle alex_dimakis henry jason_wei kevinweil michpokrass scaling01 gdb
    DeepMind's AlphaEvolve, a 2025 update to AlphaTensor and FunSearch, is a Gemini-powered coding agent for algorithm discovery that designs faster matrix multiplication algorithms, solves open math problems, and improves data center and AI training efficiency. It achieves a 23% kernel speedup in Gemini training and surpasses the state of the art on 20% of applied problems, including improvements on the Minimum Overlap Problem and the Kissing number problem. Unlike deep RL, it optimizes pieces of code rather than model weights. Meanwhile, OpenAI released GPT-4.1 in ChatGPT, specializing in coding and instruction following, with the faster GPT-4.1 mini replacing GPT-4o mini for all users. OpenAI also launched the Safety Evaluations Hub and the OpenAI to Z Challenge, using the o3/o4-mini and GPT-4.1 models to discover archaeological sites. "Maybe midtrain + good search is all you need for AI for scientific innovation" - Jason Wei.
  • May 14
    Granola launches team notes, while Notion launches meeting transcription
    gpt-4.1 gpt-4o-mini gpt-4.1-mini claude-opus claude-sonnet claude-o3 qwen3 seed1.5-vl llama-4 am-thinking-v1 openai anthropic alibaba meta-ai-fair huggingface granola coding instruction-following benchmarking model-releases reasoning image-generation collaborative-software model-performance kevinweil scaling01 steph_palazzolo andersonbcdefg reach_vb yuchenj_uw qtnx_ _akhaliq risingsayak
    GPT-4.1 is now available in ChatGPT for Plus, Pro, and Team users, focusing on coding and instruction following, with GPT-4.1 mini replacing GPT-4o mini. Anthropic is releasing new Claude models, including Claude Opus and Claude Sonnet, though some criticism of hallucinations in OpenAI's o3 was noted. Alibaba shared the Qwen3 Technical Report, while ByteDance's Seed1.5-VL posted strong benchmark results. Meta FAIR announced new models and datasets but faced criticism over Llama 4. AM-Thinking-v1 launched on Hugging Face as a 32B-scale reasoning model. Granola raised $43M in Series B and launched Granola 2.0 with a Notion-like UI. The AI ecosystem shows rapid iteration and cloning of ideas, emphasizing execution and distribution.
  • May 13
    not much happened today
    hunyuan-turbos qwen3-235b-a22b o3 gpt-4.1-nano grok-3 gemini-2.5-pro seed1.5-vl kling-2.0 tencent openai bytedance meta-ai-fair nvidia deepseek benchmarking model-performance moe reasoning vision video-understanding vision-language multimodality model-evaluation model-optimization lmarena_ai artificialanlys gdb _jasonwei iScienceLuvr _akhaliq _philschmid teortaxesTex mervenoyann reach_vb
    Tencent's Hunyuan-Turbos has risen to #8 on the LMArena leaderboard, showing strong performance across major categories and significant improvement since February. The Qwen3 model family, especially the Qwen3 235B-A22B (Reasoning) model, is noted for its intelligence and efficient parameter usage. OpenAI introduced HealthBench, a new health evaluation benchmark developed with input from over 250 physicians, where models like o3, GPT-4.1 nano, and Grok 3 showed strong results. ByteDance released Seed1.5-VL, a vision-language model with a 532M-parameter vision encoder and a 20B active parameter MoE LLM, achieving state-of-the-art results on 38 public benchmarks. In vision-language, Kling 2.0 leads image-to-video generation, and Gemini 2.5 Pro excels in video understanding with advanced multimodal capabilities. Meta's Vision-Language-Action framework and updates on VLMs for 2025 were also highlighted.
  • May 12
    Prime Intellect's INTELLECT-2 and PRIME-RL advance distributed reinforcement learning
    intellect-2 dreamo qwen gemini-2.5-pro dynamic-byte-latent-transformer gen-4-references mistral-medium-3 le-chat-enterprise primeintellect bytedance qwen gemma meta-ai-fair runwayml mistral-ai google distributed-training reinforcement-learning gpu-clusters model-optimization quantization multimodality agentic-ai video-understanding fine-tuning _akhaliq reach_vb osanseviero aiatmeta c_valenzuelab lmarena_ai adcock_brett
    Prime Intellect released INTELLECT-2 and the PRIME-RL framework for decentralized GPU training and reinforcement learning, with a vision of distributed AI training that overcomes colocation limits. ByteDance launched DreamO, a unified image-customization model, on Hugging Face. Qwen released models optimized for GPTQ, GGUF, and AWQ quantization. Gemma surpassed 150 million downloads on Hugging Face. Meta released weights for the Dynamic Byte Latent Transformer and the Collaborative Reasoner framework to improve language model efficiency and reasoning. RunwayML introduced Gen-4 References, a near-realtime model requiring no fine-tuning. Mistral AI released Mistral Medium 3, a strong multimodal model, and Le Chat Enterprise, an agentic AI assistant for business. Google updated the Gemini 2.5 Pro Preview with video understanding and UI improvements. "Airbnb for spare GPUs from all over the world" highlights the ongoing challenges and potential of distributed GPU training.
  • May 09
    not much happened today
    gemini-2.5-flash gemini-2.0-flash mistral-medium-3 llama-4-maverick claude-3.7-sonnet qwen3 pangu-ultra-moe deepseek-r1 o4-mini x-reasoner google-deepmind mistral-ai alibaba huawei openai microsoft deepseek model-performance reasoning cost-analysis reinforcement-learning chain-of-thought multilinguality code-search model-training vision model-integration giffmana artificialanlys teortaxestex akhaliq john__allard
    Gemini 2.5 Flash shows a 12-point increase on the Artificial Analysis Intelligence Index but costs 150x more than Gemini 2.0 Flash, due to 9x more expensive output tokens and 17x higher token usage during reasoning (9 × 17 ≈ 150). Mistral Medium 3 competes with Llama 4 Maverick, Gemini 2.0 Flash, and Claude 3.7 Sonnet, with better coding and math reasoning at a significantly lower price. Alibaba's Qwen3 family supports reasoning and multilingual tasks across 119 languages and includes a Web Dev tool for app building. Huawei's Pangu Ultra MoE matches DeepSeek R1 performance on Ascend NPUs, with new compute and upcoming V4 training. OpenAI's o4-mini now supports Reinforcement Fine-Tuning (RFT) using chain-of-thought reasoning. Microsoft's X-REASONER enables generalizable reasoning across modalities, post-trained on general-domain text. Deep-research integration with GitHub repos in ChatGPT enhances codebase search and reporting. The AI Engineer World's Fair offers an Early Bird discount on upcoming tickets.
  • May 08
    not much happened today
    open-code-reasoning-32b open-code-reasoning-14b open-code-reasoning-7b mistral-medium-3 llama-4-maverick gemini-2.5-pro gemini-2.5-flash claude-3.7-sonnet absolute-zero-reasoner x-reasoner fastvlm parakeet-asr openai nvidia mistral-ai google apple huggingface reinforcement-learning fine-tuning code-generation reasoning vision on-device-ai model-performance dataset-release model-optimization reach_vb artificialanlys scaling01 iscienceluvr arankomatsuzaki awnihannun risingsayak
    OpenAI launched both Reinforcement Finetuning and Deep Research on GitHub repos, drawing comparisons to Cognition's DeepWiki. Nvidia open-sourced Open Code Reasoning models (32B, 14B, 7B) with Apache 2.0 license, showing 30% better token efficiency and compatibility with llama.cpp, vLLM, transformers, and TGI. Independent evaluations highlight Mistral Medium 3 rivaling Llama 4 Maverick, Gemini 2.0 Flash, and Claude 3.7 Sonnet in coding and math reasoning, priced significantly lower but no longer open-source. Google's Gemini 2.5 Pro is noted as their most intelligent model with improved coding from simple prompts, while Gemini 2.5 Flash incurs a 150x cost increase over Gemini 2.0 Flash due to higher token usage and cost. The Absolute Zero Reasoner (AZR) achieves SOTA performance in coding and math reasoning via reinforced self-play without external data. Vision-language model X-REASONER is post-trained on general-domain text for reasoning. Apple ML research released FastVLM with on-device iPhone demo. HiDream LoRA trainer supports QLoRA fine-tuning under memory constraints. Nvidia's Parakeet ASR model tops Hugging Face ASR leaderboard with MLX implementation. New datasets SwallowCode and SwallowMath boost LLM performance in math and code. Overall, a quiet day with significant model releases and performance insights.
  • May 07
    AI Engineer World's Fair: Second Run, Twice The Fun
    gemini-2.5-pro google-deepmind waymo tesla anthropic braintrust retrieval-augmentation graph-databases recommendation-systems software-engineering-agents agent-reliability reinforcement-learning voice image-generation video-generation infrastructure security evaluation ai-leadership enterprise-ai mcp tiny-teams product-management design-engineering robotics foundation-models coding web-development demishassabis
    The 2025 AI Engineer World's Fair is expanding with 18 tracks covering topics like Retrieval + Search, GraphRAG, RecSys, SWE-Agents, Agent Reliability, Reasoning + RL, Voice AI, Generative Media, Infrastructure, Security, and Evals. New focuses include MCP, Tiny Teams, Product Management, Design Engineering, and Robotics and Autonomy featuring foundation models from Waymo, Tesla, and Google. The event highlights the growing importance of AI Architects and enterprise AI leadership. Additionally, Demis Hassabis announced the Gemini 2.5 Pro Preview 'I/O edition', which leads coding and web development benchmarks on LMArena.
  • May 06
    Gemini 2.5 Pro Preview 05-06 (I/O edition) - the SOTA vision+coding model
    gemini-2.5-pro claude-3.7-sonnet llama-nemotron qwen3 google-deepmind nvidia alibaba hugging-face multimodality coding reasoning model-release speech-recognition recommender-systems benchmarking demishassabis _philschmid lmarena_ai scaling01 fchollet
    Gemini 2.5 Pro has been updated with enhanced multimodal image-to-code capabilities and dominates the WebDev Arena Leaderboard, surpassing Claude 3.7 Sonnet in coding and other tasks. Nvidia released the Llama-Nemotron model family on Hugging Face, noted for efficient reasoning and inference. Alibaba's Qwen3 models range from 0.6B to 235B parameters, including dense and MoE variants. KerasRS was released by François Chollet as a new recommender-system library compatible with JAX, PyTorch, and TensorFlow, optimized for TPUs. These updates highlight advancements in coding, reasoning, and speech recognition models.
  • May 05
    Cursor @ $9b, OpenAI Buys Windsurf @ $3b
    llama-nemotron-ultra llama-nemotron-super llama-nemotron-nano qwen3-235b-a22b prover-v2 phi-4-reasoning ernie-4.5-turbo ernie-x1-turbo suno-v4.5 gen-4-references o1-mini openai cursor nvidia alibaba deepseek microsoft baidu suno runway keras reasoning inference-efficiency open-license moe-models math-reasoning theorem-proving model-performance music-generation image-generation recommender-systems tpu-optimization _akhaliq adcock_brett lmarena_ai fchollet
    OpenAI is reportedly close to a deal to acquire Windsurf, coinciding with Cursor's $900M funding round at a $9B valuation. Nvidia launched the Llama-Nemotron series, featuring models from 8B to 253B parameters, praised for reasoning and inference efficiency. Alibaba released the Qwen3 family with MoE and dense models up to 235B parameters, ranking highly in coding and math benchmarks. DeepSeek introduced Prover-V2, an open-source AI for math reasoning with an 88.9% pass rate on MiniF2F-test. Microsoft released reasoning-focused Phi-4 models, outperforming OpenAI's o1-mini. Baidu debuted turbo versions of ERNIE 4.5 and X1 for faster, cheaper inference. Suno v4.5 added advanced AI music-generation features, while Runway Gen-4 References enables placing characters into scenes with high consistency. KerasRS, a new recommender-system library optimized for TPUs, was released by François Chollet.
  • May 02
    not much happened today
    qwen3-14b qwen3-32b qwen3-235b phi-4-reasoning o3-mini command-a gemini-2.5-pro o4-mini olm-o2-1b o3 alibaba together-ai scaling01 microsoft deepseek cohere google epoch-ai-research inception-labs openai allenai quantization fine-tuning reinforcement-learning benchmarking video-generation diffusion-models model-performance model-evaluation model-release text-generation cline _philschmid iscienceluvr alexalbert__ _lewtun teortaxestex sarahookr reach_vb
    The Qwen team released quantized versions of Qwen3 models, including 14B, 32B, and 235B parameters, with promising coding capabilities in Qwen3-235B. Microsoft launched Phi-4-reasoning, a 14B-parameter model distilled from OpenAI's o3-mini, emphasizing supervised fine-tuning and reinforcement learning and outperforming larger models on some benchmarks. Cohere's Command A leads SQL performance on Bird Bench. Google introduced the TRAJAN eval for video-generation temporal consistency and updated the Gemini OpenAI compatibility layer. Inception Labs launched a diffusion LLM API claiming 5x speed improvements over autoregressive models. Community rankings show OpenAI's o3 model debuting strongly in web app-building tasks. Other releases include AllenAI's OLMo2 1B and additional Phi 4 variants. "Qwen3-235B shows promise for coding" and "Phi-4-reasoning tech report emphasizes SFT gains" highlight key advancements.
  • May 01
    not much happened today
    phi-4 phi-4-mini-reasoning qwen3-235b qwen3-moe-235b qwen3-moe-30b qwen3-dense-32b qwen3-dense-14b qwen3-dense-8b qwen3-dense-4b qwen3-dense-0.6b qwen2.5-omni-3b deepseek-prover-v2 llama llama-guard-4 prompt-guard-2 mimo-7b microsoft anthropic cursor alibaba togethercompute deepseek meta-ai-fair xiaomi openrouterai cohere reasoning model-fine-tuning model-evaluation benchmarking model-popularity open-source math model-scaling model-filtering jailbreak-prevention cline reach_vb vipulved akhaliq omarsar0 zhs05232838 huajian_xin mervenoyann karpathy random_walker sarahookr blancheminerva clefourrier
    Microsoft released Phi-4-reasoning, a finetuned 14B reasoning model slightly behind QwQ but limited by data-transparency and token-efficiency issues. Anthropic introduced remote MCP server support and a 45-minute Research mode in Claude. Cursor published a model popularity list. Alibaba launched Qwen3-235B and other Qwen3 variants, highlighting budget-friendly coding and reasoning capabilities, with availability on the Together AI API. Microsoft also released Phi-4-Mini-Reasoning with benchmark performance on AIME 2025 and OmniMath. DeepSeek announced DeepSeek-Prover V2 with state-of-the-art math problem solving, scaling to 671B parameters. Meta AI's Llama models hit 1.2 billion downloads, with new Llama Guard 4 and Prompt Guard 2 for input/output filtering and jailbreak prevention. Xiaomi released the open-source reasoning model MiMo-7B, trained on 25 trillion tokens. Discussions of AI model evaluation highlighted issues with the LMArena leaderboard, data-access biases favoring proprietary models, and challenges in maintaining fair benchmarking, with suggestions for alternatives like OpenRouterAI rankings. "LMArena slop and biased" and "61.3% of all data going to proprietary model providers" were noted concerns.
  • Apr 30
    ChatGPT responds to GlazeGate + LMArena responds to Cohere
    qwen3-235b-a22b qwen3 qwen3-moe llama-4 openai cohere lm-arena deepmind x-ai meta-ai-fair alibaba vllm llamaindex model-releases model-benchmarking performance-evaluation open-source multilinguality model-integration fine-tuning model-optimization joannejang arankomatsuzaki karpathy sarahookr reach_vb
    OpenAI faced backlash after a controversial ChatGPT update, leading to an official retraction admitting they "focused too much on short-term feedback." Researchers from Cohere published a paper criticizing LMArena for unfair practices favoring incumbents like OpenAI, DeepMind, X.ai, and Meta AI Fair. The Qwen3 family by Alibaba was released, featuring models up to 235B MoE, supporting 119 languages and trained on 36 trillion tokens, with integration into vLLM and support in tools like llama.cpp. Meta announced the second round of Llama Impact Grants to promote open-source AI innovation. Discussions on AI Twitter highlighted concerns about leaderboard overfitting and fairness in model benchmarking, with notable commentary from karpathy and others.
  • Apr 29
    LlamaCon: Meta AI gets into the Llama API platform business
    llama-4 qwen3 qwen3-235b-a22b qwen3-30b-a3b qwen3-4b qwen2-5-72b-instruct o3-mini meta-ai-fair cerebras groq alibaba vllm ollama llamaindex hugging-face llama-cpp model-release fine-tuning reinforcement-learning moe multilingual-models model-optimization model-deployment coding benchmarking apache-license reach_vb huybery teortaxestex awnihannun thezachmueller
    Meta celebrated progress in the Llama ecosystem at LlamaCon, launching an AI Developer platform with finetuning and fast inference powered by Cerebras and Groq hardware, though it remains waitlisted. Meanwhile, Alibaba released the Qwen3 family of large language models, including two MoE models and six dense models ranging from 0.6B to 235B parameters, with the flagship Qwen3-235B-A22B achieving competitive benchmark results and supporting 119 languages and dialects. The Qwen3 models are optimized for coding and agentic capabilities, are Apache 2.0 licensed, and have broad deployment support including local usage with tools like vLLM, Ollama, and llama.cpp. Community feedback highlights Qwen3's scalable performance and superiority over models like OpenAI's o3-mini.
See all issues

Let's Connect

If you want to get in touch with me about something or just to say hi, reach out on social media or send me an email.

  • GitHub
  • X (@smol_ai)
  • swyx at smol dot ai
© 2025 • AINews
You can also subscribe via RSS.