AINews


by smol.ai

How over 80k top AI Engineers keep up, every weekday.


We summarize top AI discords + AI reddits + AI X/Twitters, and send you a roundup each day!

"Highest-leverage 45 mins I spend every day" - Soumith

"best AI newsletter atm" and "I'm not sure that enough people subscribe" - Andrej

"genuinely incredible" - Chris

"surprisingly decent" - Hamel

You can pay for a customizable version here. Thanks to Pieter Levels for the Lex Fridman feature!

Last 30 days in AI

See all issues
  • Aug 25
    not much happened today
    grok-2 grok-2.5 vibevoice-1.5b motif-2.6b gpt-5 qwen-code xai-org microsoft motif-technology alibaba huggingface langchain-ai mixture-of-experts model-scaling model-architecture text-to-speech fine-tuning training-data optimization reinforcement-learning agentic-ai tool-use model-training model-release api software-development model-quantization elonmusk clementdelangue rasbt quanquangu akhaliq eliebakouch gdb ericmitchellai ivanfioravanti deanwball giffmana omarsar0 corbtt
    xAI released open weights for Grok-2 and Grok-2.5 with a novel MoE residual architecture and μP scaling, sparking community excitement and licensing concerns. Microsoft open-sourced VibeVoice-1.5B, a multi-speaker long-form TTS model with streaming support and a 7B variant forthcoming. Motif Technology published a detailed report on Motif-2.6B, highlighting Differential Attention, PolyNorm, and extensive finetuning, trained on AMD MI250 GPUs. In coding tools, momentum builds around GPT-5-backed workflows, with developers favoring it over Claude Code. Alibaba released Qwen-Code v0.0.8 with deep VS Code integration and MCP CLI enhancements. The MCP ecosystem advances with LiveMCP-101 stress tests, the universal MCP server "Rube," and LangGraph Platform's rollout of revision queueing and ART integration for RL training of agents.
  • Aug 22
    not much happened today
    qwen-image-edit qwen-vl-max kling-2.1 veo-3 deepseek-v3.1 genie-3 sima google-deepmind alibaba google deepseek baseten yupp multimodality embodied-ai simulation fine-tuning quantization video-generation image-generation local-inference scaling agent-training real-time-control spatial-memory demishassabis bonniesjli shreyar ostrisai lmarena_ai teortaxestex ivanfioravanti
    DeepMind released Genie 3, an interactive multimodal world simulator with advanced spatial memory and real-time avatar control, and SIMA, an embodied training agent operating inside generated worlds. Alibaba introduced Qwen-Image-Edit, an open-weights image editor scoring ELO 1098 (#2) in the Image Editing Arena, running on Qualcomm NPUs, alongside Qwen-VL-Max entering the Vision top-20. Video models like Kling 2.1 showed a 235% improvement in frame control, with new entrants Luma Ray 2 and Runway Gen-4 Turbo debuting. Google provided free Veo 3 generations in Gemini App and enhanced Google Photos with natural-language edits. DeepSeek v3.1 launched with focus on SWE and Search agents, supporting local inference on Apple Silicon with 4-bit quantization achieving ~21 tok/s on M3 Ultra. The news highlights advances in interactive simulation, vision editing, video synthesis, and scalable local AI inference.
  • Aug 21
    Cohere Command A Reasoning beats GPT-OSS-120B and DeepSeek R1 0528
    command-a-reasoning deepseek-v3.1 cohere deepseek intel huggingface baseten vllm-project chutes-ai anycoder agentic-ai hybrid-models long-context fp8-training mixture-of-experts benchmarking quantization reasoning coding-workflows model-pricing artificialanlys reach_vb scaling01 cline ben_burtenshaw haihaoshen jon_durbin _akhaliq willccbb teortaxestex
Cohere's Command A Reasoning model outperforms GPT-OSS in open deep research capabilities, emphasizing agentic use cases for 2025. DeepSeek-V3.1 introduces a hybrid reasoning architecture that toggles between reasoning and non-reasoning modes, optimized for agentic workflows and coding, with extensive long-context pretraining (~630B tokens for 32k context, ~209B for 128k), FP8 training, and a large MoE with roughly 37B activated parameters. Benchmarks show competitive performance with notable improvements on SWE-Bench and other reasoning tasks. The model is priced at $0.56/M input and $1.68/M output tokens on the DeepSeek API and enjoys rapid ecosystem integration, including HF weights, INT4 quantization by Intel, and vLLM reasoning toggles. Community feedback highlights the hybrid design's pragmatic approach to agent and software engineering workflows, though some note the lack of tool use in reasoning mode.
  • Aug 20
    DeepSeek V3.1: 840B token continued pretrain, beating Claude 4 Sonnet at 11% of its cost
    deepseek-v3.1 seed-oss-36b computerrl gemini-2.5-pro gpt-5 claude-code gpt-oss-120b gpt-oss-20b deepseek bytedance zhipu-ai github microsoft anthropic together-ai baseten huggingface token-efficiency coding agentic-benchmarks long-context reinforcement-learning developer-tools fine-tuning multinode-training model-release teortaxestex rasbt lukehoban burkeholland _catwu cline winglian
DeepSeek released DeepSeek V3.1, a quietly rolled out open model with a 128K context window and improvements in token efficiency, coding, and agentic benchmarks. ByteDance launched the permissive Seed-OSS 36B model on Hugging Face, noted for long-context and reasoning capabilities. Zhipu AI introduced ComputerRL, a reinforcement learning framework for computer-use agents, achieving strong benchmark results. In developer tooling, GitHub Copilot expanded globally, Microsoft VS Code integrated Gemini 2.5 Pro and updated GPT-5 agent prompts, and Anthropic launched Claude Code seats with spend controls. Open-source fine-tuning advances include Together AI adding SFT for gpt-oss-120B/20B and Baseten enabling multinode 120B training with Truss CLI. The community noted mixed performance and ongoing post-training adjustments for DeepSeek V3.1.
  • Aug 19
    Databricks' $100B Series K
    deepseek-v3.1-base deepseek-v3.1-instruct chatgpt-go qwen-image-edit databricks openai deepseek hugging-face alibaba model-release benchmarking pricing-models fine-tuning model-architecture image-editing video-generation api agentic-ai sama nickaturley kevinweil gdb sherwinwu nptacek reach_vb clementdelangue teortaxestex quixiai georgejrjrjr scaling01 alibaba_qwen linoy_tsaban ostrisai lmarena_ai
    Databricks reached a $100 billion valuation, becoming a centicorn with new Data (Lakebase) and AI (Agent Bricks) products. OpenAI launched ChatGPT Go in India at ₹399/month (~$4.55), offering significantly increased usage limits and UPI payment support, with plans for global expansion. The DeepSeek V3.1 Base/Instruct models were quietly released on Hugging Face, showing strong coding benchmark performance and adopting an Anthropic-style hybrid system. The Qwen-Image-Edit model from Alibaba is gaining traction with integrations and community pruning experiments. "DeepSeek V3.1 Base outperforms Claude 4 Opus on coding benchmarks" and "ChatGPT Go offers 10x higher message limits and 2x longer memory" highlight key advancements.
  • Aug 18
    not much happened today
    gemma-3-270m canary-1b parakeet-tdt-0.6b nemotron-nano-v2 qwen-image-edit dino-v3 nvidia alibaba tencent meta-ai-fair ibm datology synthetic-data multilingual-asr self-supervised-learning vision model-efficiency training-data data-augmentation model-speedup domain-transfer demishassabis adrgrondin rasbt reach_vb ctnzr clementdelangue natolambert _akhaliq itspaulai mervenoyann xenovacom tomaarsen pratyushmaini code_star leavittron k_schuerholt giffmana
Gemma 3 270M, an ultra-small model optimized for edge and mobile use, was released and is gaining adoption. NVIDIA launched two open multilingual ASR models, Canary 1B and Parakeet-TDT 0.6B, trained on 1 million hours of data with CC-BY licensing, plus the efficient Nemotron-Nano v2 9B model with significant speedups. Alibaba's Qwen-Image-Edit offers bilingual text editing and semantic image transformations. Tencent Hunyuan introduced a controllable game-world video generator trained on over 1 million gameplay recordings. Meta's DINOv3 presents a scalable self-supervised vision backbone with strong domain transfer capabilities. IBM quietly released efficient English embedding models under a commercial-friendly license. The BeyondWeb synthetic data paper shows significant training speed and performance gains over prior datasets. Analysis of the HRM architecture suggests its performance improvements largely stem from data augmentation and scaffolding rather than novel architecture. "Models and datasets are openly licensed and available on Hugging Face."
  • Aug 15
    not much happened today
    gpt-5 gpt-5-high gpt-5-mini-high gpt-5-nano-high imagen-4 gemma-3-270m openai google lmsys model-releases model-performance prompt-engineering developer-tools image-generation model-optimization transformers tokenization model-scaling sama aidan_mclau kevinweil lmarena_ai edwinarbus gdb omarsar0 philschmid m4rkmc
    OpenAI rolled out GPT-5 as the default in ChatGPT with new modes and a "warmer" personality, plus expanded message limits for Plus/Team users and Enterprise/Edu access. Performance rankings show gpt-5-high leading, with smaller variants also ranked, though critiques note some underperformance versus Chinese models and sensitivity to sycophancy. OpenAI enhanced developer tools with a "Quick eval" feature, coding tips, and an improved Playground. Google released Imagen 4 generally available with faster generation and higher resolution, plus the ultra-small Gemma 3 270M model with a large vocabulary and ecosystem support. Podcasts featured OpenAI leaders discussing GPT-5 systems, routing, and efficiency.
  • Aug 14
    Western Open Models get Funding: Cohere $500m @ 6.8B, AI2 gets $152m NSF+NVIDIA grants
    gpt-5 o3 command-a gemma-3-270m imagen-4 dinov3 openai perplexity-ai ai2 nvidia cohere meta-ai-fair google hugging-face ollama unsloth model-speed funding ai-infrastructure on-device-ai quantization embedding-models image-generation self-supervised-learning vision dense-prediction benchmarking instruction-following model-optimization model-release challenge joelle_pineau fchollet awnihannun _philschmid osanseviero
OpenAI's GPT-5 achieved a speedrun of Pokemon Red 3x faster than o3. Perplexity raised $200M at a $20B valuation. AI2 secured $75M in NSF grants and $77M from NVIDIA for AI infrastructure projects like Olmo and Molmo. Cohere raised $500M and hired Joelle Pineau from Meta AI (FAIR), boosting models like Command A. Google released Gemma 3 270M, an on-device tiny LLM with INT4 QAT checkpoints and large embedding tables, and made Imagen 4 generally available with a fast version at $0.02/image. Meta AI (FAIR) introduced DINOv3, a family of self-supervised vision foundation models with high-resolution dense features and strong performance on benchmarks like COCO detection and ADE20K segmentation, under a permissive license. A $150,000 MiniMax AI Agent Challenge is ongoing with 200+ prizes, encouraging AI project builds by August 25.
  • Aug 13
    not much happened today
    gpt-5 gpt-oss-120b opus-4.1 sonnet-4 openai anthropic minimax context-windows model-routing model-hosting multi-tool-pipelines prompt-caching model-extraction model-pairing cost-efficiency model-optimization sama jeremyphoward jxmnop _catwu
    OpenAI continues small updates to GPT-5, introducing "Auto/Fast/Thinking" modes with 196k token context, 3,000 messages/week, and dynamic routing to cheaper models for cost efficiency. The MiniMax AI Agent Challenge offers $150,000 in prizes for AI agent development by August 25. The community discusses GPT-OSS-120B base model extraction, hosting, and tooling improvements, including multi-tool pipelines and flex-attention. Anthropic announces model pairing in Claude Code with Opus 4.1 for planning and Sonnet 4 for execution, expanding context to 1M tokens and introducing prompt caching. Key figures include @sama, @jeremyphoward, @jxmnop, and @_catwu.
  • Aug 12
    not much happened today
    gpt-5 gpt-5-mini gpt-5-nano claude-sonnet-4 glm-4.5v genie-3 gemini-app qwen-image-distilled matrix-game-2.0 jan-v1 qwen3-4b-thinking openai anthropic zhipu-ai google-deepmind alibaba skywork jan-ai context-window multimodality reinforcement-learning agentic-tasks video-generation image-generation real-time-systems web-search model-accuracy developer-tools open-source-models long-context model-scaling
OpenAI released the GPT-5 series including GPT-5-mini and GPT-5-nano, with mixed user feedback on performance and API behavior. Anthropic extended the Claude Sonnet 4 context window to 1 million tokens, a 5x increase, enhancing large document processing. Zhipu AI launched the open-source multimodal GLM-4.5V model with improvements in RL scaling and agentic tasks. Google DeepMind showcased the video generation model Genie 3 and updated the Gemini App with new features like Deep Think and Gemini Live. Alibaba Qwen released a distilled variant of Qwen-Image and enhanced their Deep Research capabilities. Open source models like Skywork's Matrix-Game 2.0 and Jan.ai's Jan-v1 (built on Qwen3-4B-Thinking) were introduced, focusing on real-time world modeling and web search respectively. Developer tools such as Claude Code and Cursor were also highlighted.
  • Aug 11
    OpenAI's IMO Gold model also wins IOI Gold
    gpt-5 gpt-5-thinking gpt-5-mini gemini-2.5-pro claude opus-4.1 openai google-deepmind anthropic reinforcement-learning benchmarking model-performance prompt-engineering model-behavior competitive-programming user-experience model-naming model-selection hallucination-detection sama scaling01 yanndubs sherylhsu ahmed_el-kishky jerry_tworek noam_brown alex_wei amandaaskell ericmitchellai jon_durbin gdb jerryjliu0
OpenAI announced placing #6 among human coders at the IOI, reflecting rapid progress in competitive coding AI over the past two years. The GPT-5 launch faced significant user backlash over restrictive usage limits and removal of model selection control, leading to a reversal and increased limits of 3,000 requests per week for Plus users. Confusion around GPT-5 naming and benchmarking was highlighted, with critiques of methodological issues in comparisons against models like Claude and Gemini. Performance reviews of GPT-5 are mixed, with claims of near-zero hallucinations from OpenAI staff but user reports of confident hallucinations and steering difficulties. Benchmarks show GPT-5 mini performing well on document understanding, while the full GPT-5 is seen as expensive and middling. On the Chatbot Arena, Gemini 2.5 Pro holds a 67% winrate against GPT-5 Thinking. Prompting and model behavior remain key discussion points.
  • Aug 08
    not much happened today
    gpt-5 gpt-4o grok-4 claude-4-sonnet openai microsoft reasoning latency model-routing benchmarking reinforcement-learning hallucination-control creative-writing priority-processing api-traffic model-deprecation user-experience model-selection voice-mode documentation sama nickaturley elaineyale6 scaling01 mustafasuleyman kevinweil omarsar0 jeremyphoward juberti epochairesearch lechmazur gdb
    OpenAI launched GPT-5 with a unified user experience removing manual model selection, causing initial routing and access issues for Plus users that are being addressed with fixes including restored model options and increased usage limits. GPT-5 introduces "Priority Processing" for lower latency at higher price tiers, achieving ~750ms median time-to-first-token in some cases. Microsoft reports full Copilot adoption of GPT-5, and API traffic doubled within 24 hours, peaking at 2 billion tokens per minute. Early benchmarks show GPT-5 leading in reasoning tasks like FrontierMath and LiveBench, with improvements in hallucination control and creative writing, though some models like Grok-4 and Claude-4 Sonnet Thinking outperform it in specific RL-heavy reasoning benchmarks. OpenAI also released extensive migration and feature guides but faced some rollout issues including a broken code sample and a problematic Voice Mode launch. "Unified GPT-5" ends model pickers, pushing developers away from manual model selection.
  • Aug 07
    OpenAI rolls out GPT-5 and GPT-5 Thinking to >1B users worldwide; -mini and -nano help claim Pareto Frontier
    gpt-5 gpt-5-mini gpt-5-nano claude-4.1-sonnet claude-4.1-opus openai cursor_ai jetbrains microsoft notion perplexity_ai factoryai model-architecture context-windows pricing-models coding long-context prompt-engineering model-benchmarking model-integration tool-use reasoning sama scaling01 jeffintime embirico mustafasuleyman cline lmarena_ai nrehiew_ ofirpress sauers_
    OpenAI launched GPT-5, a unified system featuring a fast main model and a deeper thinking model with a real-time router, supporting up to 400K context length and aggressive pricing that reclaims the Pareto Frontier of Intelligence. The rollout includes variants like gpt-5-mini and gpt-5-nano with significant cost reductions, and integrations with products such as ChatGPT, Cursor AI, JetBrains AI Assistant, Microsoft Copilot, Notion AI, and Perplexity AI. Benchmarks show GPT-5 performing strongly in coding and long-context reasoning, roughly matching Claude 4.1 Sonnet/Opus on SWE-bench Verified. The launch was accompanied by a GPT-5 prompting cookbook and notable community discussions on pricing and performance.
  • Aug 06
    not much happened today
    gpt-oss-120b gpt-oss-20b kimi-k2 deepseek-r1 qwen-3-32b openai huggingface microsoft llamaindex ollama baseten fireworksai cerebras groq together anthropic google uk-aisi sliding-window-attention mixture-of-experts rope context-length mxfp4-format synthetic-data reasoning-core-hypothesis red-teaming benchmarking coding-benchmarks model-performance fine-tuning woj_zaremba sama huybery drjimfan jxmnop scaling01 arunv30 kevinweil xikun_zhang_ jerryjliu0 ollama basetenco reach_vb gneubig shxf0072 _lewtun
    OpenAI released its first open models since GPT-2, gpt-oss-120b and gpt-oss-20b, which quickly trended on Hugging Face. Microsoft supports these models via Azure AI Foundry and Windows Foundry Local. Key architectural innovations include sliding window attention, mixture of experts (MoE), a RoPE variant, and a 256k context length. The models use a new MXFP4 format supported by llama.cpp. Hypotheses suggest gpt-oss was trained on synthetic data to enhance safety and performance, supporting the Reasoning Core Hypothesis. OpenAI announced a $500K bounty for red teaming with partners including Anthropic, Google, and the UK AISI. Performance critiques highlight inconsistent benchmarking results, with GPT-OSS-120B scoring 41.8% on the Aider Polyglot coding benchmark, trailing competitors like Kimi-K2 and DeepSeek-R1. Some users note the model excels in math and reasoning but lacks common sense and practical utility.
  • Aug 05
    OpenAI's gpt-oss 20B and 120B, Claude Opus 4.1, DeepMind Genie 3
    gpt-oss-120b gpt-oss-20b gpt-oss claude-4.1-opus claude-4.1 genie-3 openai anthropic google-deepmind mixture-of-experts model-architecture agentic-ai model-training model-performance reasoning hallucination-detection gpu-optimization open-weight-models realtime-simulation sama rasbt sebastienbubeck polynoamial kaicathyc finbarrtimbers vikhyatk scaling01 teortaxestex
OpenAI released the gpt-oss family, including gpt-oss-120b and gpt-oss-20b, their first open-weight models since GPT-2, designed for agentic tasks and licensed under Apache 2.0. These models use a Mixture-of-Experts (MoE) architecture favoring a wide rather than deep design, with innovative features like bias units in attention and a unique SwiGLU variant. The 120B model was trained with about 2.1 million H100 GPU hours. Meanwhile, Anthropic launched claude-4.1-opus, touted as the best coding model currently. DeepMind showcased genie-3, a realtime world simulation model with minute-long consistency. The releases highlight advances in open-weight models, reasoning capabilities, and world simulation. Key figures like @sama, @rasbt, and @SebastienBubeck provided technical insights and performance evaluations, noting strengths and hallucination risks.
  • Aug 04
    Qwen-Image: SOTA text rendering + 4o-imagegen-level Editing Open Weights MMDiT
    qwen-image mmdit gemini-2.5 o3-pro seedprover glm-4.5 xbai-o4 hunyuan alibaba google-deepmind openai bytedance kaggle tencent bilingual-text-rendering image-generation image-editing synthetic-data reasoning math-theorem-proving benchmarking instruction-following model-efficiency open-weight-models model-transparency competitive-evaluation swyx demishassabis tulseedoshi mparakhin teortaxestex cgeorgiaw dorialexander steph_palazzolo corbtt synthwavedd epochairesearch
Alibaba surprised with the release of Qwen-Image, a 20B MMDiT model excelling at bilingual text rendering and graphic poster creation, with open weights and demos available. Google DeepMind launched Gemini 2.5 Deep Think to Ultra subscribers, showing significant reasoning improvements and benchmark gains (+11.2% AIME, +13.2% HLE, +13.4% LiveCodeBench) rivaling OpenAI's o3 Pro. ByteDance's SeedProver achieved state-of-the-art math theorem proving results, surpassing DeepMind's AlphaGeometry2. OpenAI is developing a "universal verifier" for transferring math and coding gains to other domains. Competitive reasoning benchmarks and game arenas by Google and Kaggle highlight a meta-shift in reasoning model efficiency, comparable to the original Transformer leap. Other open-weight models gaining momentum include GLM-4.5, XBai o4, and Tencent Hunyuan with a focus on efficient training. "Qwen is all you need."
  • Aug 01
    Gemini 2.5 Deep Think finally ships
    gemini-2.5-deep-think gpt-oss gpt-5 kimi-k2-turbo-preview qwen3-coder-flash glm-4.5 step-3 claude openai anthropic google-deepmind kimi-moonshot alibaba ollama zhipu-ai stepfun parallel-thinking model-releases moe attention-mechanisms multimodal-reasoning model-performance context-windows open-source-models model-leaks creative-ai coding reasoning model-optimization demishassabis philschmid scaling01 teortaxestex teknium1 lmarena_ai andrewyng
OpenAI is rumored to soon launch new GPT-OSS and GPT-5 models amid drama over Anthropic revoking its access to Claude. Google DeepMind quietly launched Gemini 2.5 Deep Think, a model optimized for parallel thinking that achieved gold-medal level at the IMO and excels in reasoning, coding, and creative tasks. Leaks suggest OpenAI is developing a 120B MoE and a 20B model with advanced attention mechanisms. Chinese AI companies like Kimi Moonshot, Alibaba, and Zhipu AI are releasing faster and more capable open models such as kimi-k2-turbo-preview, Qwen3-Coder-Flash, and GLM-4.5, signaling strong momentum and potential to surpass the U.S. in AI development. "The final checkpoint was selected just 5 hours before the IMO problems were released," highlighting rapid development cycles.
  • Jul 31
    Figma's $50+b IPO
    horizon-alpha gpt-5 gemini-2.5-pro qwen3-coder qwen3-coder-flash-30b-a3b command-a-vision gpt-4.1 llama-4-maverick flux-1-krea-dev glm-4.5 voxtral openai openrouter alibaba unslothai cohere huggingface black-forest-labs diffusers ostrisai zhipu-ai together-ai mistral-ai reasoning svg-generation agentic-ai context-windows vision fine-tuning inference-time-training model-generalization open-models technical-reports scaling01 teortaxestex huybery nickfrosst aidangomez reach_vb zai_org corbtt jxmnop teknuim1
    OpenAI's stealth model horizon-alpha on OpenRouter sparks speculation as a precursor to GPT-5, showing strong reasoning and SVG generation capabilities, comparable to Gemini 2.5 Pro. Alibaba released the Qwen3-Coder family, including a fast Qwen3-Coder-Flash (30B-A3B) variant with agentic features and 1M context length support via UnslothAI. Cohere launched Command A Vision, a 111B parameter open-weights vision-language model outperforming GPT-4.1 and Llama 4 Maverick on enterprise benchmarks. Black Forest Labs introduced FLUX.1 Krea [dev], an open-weights photorealism model compatible with fine-tuning tools like diffusers and ostrisai. Zhipu AI unveiled GLM-4.5, a hybrid reasoning open model with agentic capabilities available on Together AI. Discussions highlight the rising importance of inference-time training and reasoning model generalization. Mistral AI released the technical report for Voxtral continuing its open science efforts.
  • Jul 30
    not much happened today
    glm-4.5 glm-4.5-air qwen3-coder qwen3-235b kimi-k2 grok-imagine wan-2.2 smollm3 figure-01 figure-02 vitpose++ chatgpt zhipu-ai alibaba moonshot-ai x-ai figure openai runway mlx ollama deeplearningai model-releases model-performance moe image-generation video-generation pose-estimation robotics training-code-release interactive-learning in-context-learning yuchenj_uw corbtt reach_vb ollama deeplearningai gdb sama c_valenzuelab adcock_brett skalskip92 loubnabenallal1 hojonathanho ostrisai
Chinese AI labs have released powerful open-source models like GLM-4.5 and GLM-4.5-Air from Zhipu AI, Qwen3 Coder and Qwen3-235B from Alibaba, and Kimi K2 from Moonshot AI, highlighting a surge in permissively licensed models. Zhipu AI's GLM-4.5 is a 355B parameter MoE model competitive with Claude 4 Opus and Gemini 2.5 Pro. Alibaba's Qwen3 Coder shows strong code generation performance with a low edit failure rate, while Moonshot AI's Kimi K2 is a 1 trillion-parameter MoE model surpassing benchmarks like LiveCodeBench. In video and image generation, xAI launched Grok Imagine, and Wan2.2 impressed with innovative image-to-video generation. Robotics advances include Figure's Figure-01 and Figure-02 humanoid robots and ViTPose++ for pose estimation in basketball analysis. SmolLM3 training and evaluation code was fully released under Apache 2.0. OpenAI introduced Study Mode in ChatGPT to enhance interactive learning, and Runway rolled out Runway Aleph, a new in-context video model for multi-task visual generation. "Orgs avoiding these models are at a significant competitive disadvantage," as noted by @corbtt.
  • Jul 29
    not much happened today
    glm-4.5 glm-4.5-air qwen3-coder qwen3-235b kimi-k2 wan-2.2 grok-imagine smollm3 figure-01 figure-02 vitpose++ zhipu-ai alibaba moonshot-ai x-ai ideogram figure smollm openai model-releases moe model-benchmarking image-generation video-generation pose-estimation robotics training-code-release apache-license yuchenj_uw corbtt cline reach_vb ollama deeplearningai ostrisai hojonathanho adcock_brett skalskip92 loubnabenallal1
Chinese labs have released a wave of powerful, permissively licensed models in July, including Zhipu AI's GLM-4.5 and GLM-4.5-Air, Alibaba's Qwen3 Coder and Qwen3-235B, and Moonshot AI's Kimi K2. These models feature large-scale Mixture of Experts architectures with active parameters ranging from 3B to 32B and context windows up to 256K tokens. Zhipu AI's GLM-4.5 competes with Claude 4 Opus and Gemini 2.5 Pro in benchmarks. Moonshot AI's Kimi K2 is a 1 trillion-parameter MoE model surpassing other open-weight models on LiveCodeBench and AceBench. In video and image generation, xAI launched Grok Imagine, and Wan2.2 impressed with its Image-to-Video approach. Ideogram released a character consistency model. Robotics advances include Figure's Figure-01 and Figure-02 humanoid robots and ViTPose++ for pose estimation in basketball analysis. The SmolLM3 training and evaluation code was fully released under an Apache 2.0 license. "Orgs avoiding these Chinese open-source models are at a significant competitive disadvantage," as noted by @corbtt.
  • Jul 28
    GLM-4.5: Deeper, Headier, & better than Kimi/Qwen/DeepSeek (SOTA China LLM?)
    glm-4.5-355b-a32b glm-4.5-air-106b-a12b qwen3-coder claude-4-opus grok-4 o3 gpt-4.1 gpt-5 kimi-k2 claude-sonnet-4 z-ai alibaba huggingface openai reinforcement-learning token-efficiency model-optimization open-source-models agentic-ai coding model-training lupantech teortaxestex mervenoyann _lewtun scaling01 cline
    Z.ai (Zhipu AI) released the GLM-4.5-355B-A32B and GLM-4.5-Air-106B-A12B open weights models, claiming state-of-the-art performance competitive with Claude 4 Opus, Grok 4, and OpenAI's o3. These models emphasize token efficiency and efficient reinforcement learning training validated by the Muon optimizer. Alibaba Qwen introduced Group Sequence Policy Optimization (GSPO), a new reinforcement learning algorithm powering the Qwen3 model suite, integrated into Hugging Face's TRL library. Speculation surrounds mystery models "summit" and "zenith" as potential GPT-5 variants based on GPT-4.1 architecture. Qwen3-Coder shows strong coding benchmark results, rivaling Claude Sonnet 4 and Kimi K2. The rise of powerful Chinese open-source models like GLM-4.5, Wan-2.2, and Qwen3 Coder contrasts with a slowdown from Western labs such as OpenAI.
  • Jul 25
    not much happened today
    gpt-5 gpt4-0314 qwen3-235b-thinking runway-aleph imagen-4-ultra smollm3 grok-4 openai alibaba runway hugging-face google anthropic pytorch lmarena reinforcement-learning reasoning video-generation image-generation model-optimization open-source model-performance inference-speed integration stability sama clementdelangue xikun_zhang_ teknnium1 chujiezheng
    OpenAI has fully rolled out its ChatGPT agent to all Plus, Pro, and Team users and is building hype for the upcoming GPT-5, which reportedly outperforms Grok-4 and can build a cookie clicker game in two minutes. Alibaba's Qwen team released the open-source reasoning model Qwen3-235B-Thinking, achieving an 89% win rate over gpt4-0314 using a new RL algorithm called Group Sequence Policy Optimization (GSPO). Runway introduced Runway Aleph, a state-of-the-art in-context video model for editing and generating video content. Hugging Face highlights the growing momentum of open-source AI, especially from Chinese teams. Other updates include Kling's upgrades for image-to-video generation and Google's Imagen 4 Ultra being recognized as a top text-to-image model. Anthropic integrated Claude with Canva for branded visual designs but faces stability issues. The PyTorch team released optimized checkpoints for SmolLM3 to speed up inference.
  • Jul 24
    3x in 3 months: Cursor @ $28b, Cognition + Windsurf @ $10b
    qwen3-coder chatgpt-agent claude-code mini cursor cognition windsurf alibaba openai anthropic perplexity agentic-ai fundraising software-engineering ai-coding agentic-economy model-integration community-feedback performance-benchmarking bindureddy xikun_zhang_ aravsrinivas gergelyorosz jeremyphoward
    Cursor is reportedly fundraising at a $28 billion valuation with $1 billion ARR, while the combined Cognition+Windsurf entity is fundraising at a $10 billion valuation after acquiring Windsurf remainco for $300 million. The competition between AI coding agents intensifies as Cursor focuses on Async SWE Agents and Cognition+Windsurf acquires an agentic IDE. Alibaba's Qwen3-Coder gains widespread adoption for coding tasks and integration into tools like Claude Code and LM Studio. OpenAI rolls out ChatGPT Agent to all Plus, Pro, and Team users, sparking discussions about an "agentic economy" emphasizing AI literacy. Anthropic's Claude Code is praised as a premier development tool with active community feedback. Perplexity's Comet browser assistant receives positive reviews and new feature showcases. The debate continues on whether AI coding tools will replace developers, with critiques highlighting the ongoing human effort required. A new minimalistic software engineering agent, mini, achieves 65% on SWE-bench with just 100 lines of code.
  • Jul 23
    not much happened today
    qwen3-coder-480b-a35b-instruct kimi-k2 alibaba openrouterai togethercompute vllm_project unslothai white-house code-generation benchmarking model-integration context-windows open-source national-security infrastructure ai-policy fchollet clementdelangue scaling01 aravsrinivas rasbt gregkamradt yuchenj_uw
    Alibaba announced the release of Qwen3-Coder-480B-A35B-Instruct, an open agentic code model with 480B parameters and 256K context length, praised for rapid development and strong coding performance. Benchmark claims of 41.8% on ARC-AGI-1 faced skepticism from François Chollet and others due to reproducibility issues. The model quickly integrated into ecosystems like vLLM, Dynamic GGUFs, and OpenRouterAI. The White House unveiled a new AI Action Plan emphasizing Innovation, Infrastructure, and International Diplomacy, linking AI leadership to national security and prioritizing compute access for the Department of Defense. The plan sparked debate on open vs. closed-source AI, with calls from Clement Delangue to embrace open science to maintain US AI competitiveness.
  • Jul 22
    not much happened today
    kimi-k2 qwen3-235b-a22b qwen3-coder-480b-a35b gemini-2.5-flash-lite mistral-7b deepseek-v3 moonshot-ai alibaba google google-deepmind openai hugging-face vllm-project mixture-of-experts agentic-ai model-optimization model-training benchmarking code-generation long-context multimodality math reinforcement-learning model-architecture model-performance open-source alignment demishassabis rasbt alexwei_ yitayml
    Moonshot AI released the Kimi K2, a 1-trillion parameter ultra-sparse Mixture-of-Experts (MoE) model with the MuonClip optimizer and a large-scale agentic data pipeline using over 20,000 tools. Shortly after, Alibaba updated its Qwen3 model with the Qwen3-235B-A22B variant, which outperforms Kimi K2 and other top models on benchmarks like GPQA and AIME despite being 4.25x smaller. Alibaba also released Qwen3-Coder-480B-A35B, a MoE model specialized for coding with a 1 million token context window. Google DeepMind launched Gemini 2.5 Flash-Lite, a faster and more cost-efficient model outperforming previous versions in coding, math, and multimodal tasks. The MoE architecture is becoming mainstream, with models like Mistral, DeepSeek, and Kimi K2 leading the trend. In mathematics, an advanced Gemini model achieved a gold medal level score at the International Mathematical Olympiad (IMO), marking a first for AI. An OpenAI researcher noted their IMO model "knew" when it did not have a correct solution, highlighting advances in model reasoning and self-awareness.
  • Jul 21
    OAI and GDM announce IMO Gold-level results with natural language reasoning, no specialized training or tools, under human time limits
    gemini-1.5-pro o1 openai google-deepmind reinforcement-learning reasoning model-scaling fine-tuning model-training benchmarking natural-language-processing terence_tao oriol_vinyals alexander_wei jerry_tworek paul_christiano eliezer_yudkowsky
    OpenAI and Google DeepMind achieved a major milestone by solving 5 out of 6 problems at the International Mathematical Olympiad (IMO) 2025 within the human time limit of 4.5 hours, earning IMO Gold-medal-level scores. This breakthrough was accomplished using general-purpose reinforcement learning and pure in-weights reasoning without specialized tools or internet access, surpassing previous systems like AlphaProof and AlphaGeometry2. The success resolved a 3-year-old bet on AI's capability to solve IMO problems and sparked discussions among mathematicians including Terence Tao. Still, 26 human competitors outperformed the AI systems on the hardest combinatorics problem (P6). The achievement highlights advances in reinforcement learning, reasoning, and model scaling in AI research.
  • Jul 17
    ChatGPT Agent: new o* model + unified Deep Research browser + Operator computer use + Code Interpreter terminal
    o3 o4 gptnext openai reinforcement-learning benchmarking model-performance model-risk long-context model-deployment fine-tuning sama gdb kevinweil xikun_zhang_ keren_gu boazbaraktcs
    OpenAI launched the ChatGPT Agent, a new advanced AI system capable of browsing the web, coding, analyzing data, and creating reports, marking a significant step towards human-like computer use. The agent, distinct from and superior to o3, is considered the first public exposure of what was internally called o4, now merged into GPTNext. It features end-to-end reinforcement learning, can operate for extended periods (tested up to 2 hours), and is classified as "High" risk for biological misuse, with safeguards activated. Early benchmarks show mixed results, excelling in some tests like WebArena and BrowserComp but underperforming on others like PaperBench. Key figures involved include Sam Altman, Greg Brockman, and Kevin Weil, with technical insights from xikun_zhang_ and risk commentary from KerenGu and boazbaraktcs. The launch sparked speculation that this was GPT-5, which OpenAI confirmed it was not.
  • Jul 16
    not much happened today
    kimi-k2 gpt-4.1 voxtral goedel-prover-v2 llama-3 mistral-ai moonshot-ai nous-research google-deepmind openai groq anthropic speech-recognition mixture-of-experts benchmarking dataset-release model-architecture theorem-proving reinforcement-learning asymmetry-of-verification inference-speed model-performance cline _jasonwei
    Mistral released Voxtral, claimed as the world's best open speech recognition models, available via API and Hugging Face. Moonshot AI launched Kimi K2, a trillion-parameter Mixture-of-Experts (MoE) model, outperforming GPT-4.1 on benchmarks with 65.4% on SWE-Bench Verified and achieving 200 tokens/second inference speed on Groq hardware. Nous Research open-sourced the Hermes 3 dataset with 1 million samples, aiding SOTA models on the Llama-3 series. Google DeepMind introduced the Mixture-of-Recursions (MoR) architecture promising 2x inference speed and 50% parameter reduction but faced skepticism. Goedel-Prover V2 topped the PutnamBench theorem proving benchmark. AtCoder World Finals saw a human winner with OpenAI placing second. Research highlights include Jason Wei's insights on reinforcement learning and the "Verifier's Law" emphasizing the asymmetry of verification in AI training.
  • Jul 15
    Voxtral - Mistral's SOTA ASR model in 3B (mini) and 24B ("small") sizes beats OpenAI Whisper large-v3
    voxtral-3b voxtral-24b kimi-k2 mistral-ai moonshot-ai groq together-ai deepinfra huggingface langchain transcription long-context function-calling multilingual-models mixture-of-experts inference-speed developer-tools model-integration jeremyphoward teortaxestex scaling01 zacharynado jonathanross321 reach_vb philschmid
    Mistral surprises with the release of Voxtral, a transcription model outperforming Whisper large-v3, GPT-4o mini Transcribe, and Gemini 2.5 Flash. Voxtral models (3B and 24B) support 32k token context length, handle audios up to 30-40 minutes, offer built-in Q&A and summarization, are multilingual, and enable function-calling from voice commands, powered by the Mistral Small 3.1 language model backbone. Meanwhile, Moonshot AI's Kimi K2, a non-reasoning Mixture of Experts (MoE) model built by a team of around 200 people, gains attention for blazing-fast inference on Groq hardware, broad platform availability including Together AI and DeepInfra, and local running on M4 Max 128GB Mac. Developer tool integrations include LangChain and Hugging Face support, highlighting Kimi K2's strong tool use capabilities.
  • Jul 14
    not much happened today
    kimi-k2 grok-4 gpt-5 gemini-2.5 gemini-embedding cognition windsurf moonshot-ai x-ai openai google stanfordnlp huggingface mixture-of-experts model-training model-performance fine-tuning benchmarking agentic-ai model-bugs embedding-models sama hardmaru jeremyphoward akhaliq teortaxestex yuchenj_uw demishassabis
    Cognition is acquiring the remaining assets of Windsurf after a significant weekend deal. Moonshot AI released Kimi K2, an open-source, MIT-licensed agentic model with 1 Trillion total / 32B active parameters using a Mixture-of-Experts architecture, trained on 15.5 Trillion tokens with the MuonClip optimizer, showing top performance on benchmarks like EQ-Bench and Creative Writing. xAI launched Grok-4, ranking 5th on IQ Bench but with notable quirks including a bug causing it to respond only with "Heavy" and a high frequency of Elon Musk mentions. Rumors about OpenAI delaying an open-source model release surfaced, with speculation about CEO sama's PR strategy and a possible GPT-5 launch in September. The Gemini 2.5 paper was released with 3,295 authors, and Google introduced its Gemini Embedding model, topping the MTEB leaderboard.
See all issues

Let's Connect

If you want to get in touch with me about something or just to say hi, reach out on social media or send me an email.

  • GitHub /
  • X (@smol_ai) /
  • swyx at smol dot ai
© 2025 • AINews
You can also subscribe by rss .