AINews


by smol.ai

How over 150k top AI Engineers keep up, every weekday.

We summarize top AI discords + AI reddits + AI X/Twitters, and send you a roundup each day!

"Highest-leverage 45 mins I spend everyday" - Soumith

" best AI newsletter atm " and " I'm not sure that enough people subscribe " - Andrej

"genuinely incredible" - Chris

"surprisingly decent" - Hamel

Thanks to Pieter Levels for the Lex Fridman feature!

Last 30 days in AI

  • Mar 16
    not much happened today
    kimi-linear-48b codex gpt-5.4 claude-code moonshot openai assemblyai langchain attention-mechanisms model-architecture inference-speed agent-feedback agent-skills multi-agent-systems knowledge-transfer cli-tools coding-agents model-deployment kimi_moonshot elonmusk yuchenj_uw nathancgy4 eliebakouch tokenbender behrouz_ali cloneofsimo fidjissimo sama gdb andrewyng itsafiz simplifyinai
    Moonshot's Attention Residuals paper introduced an input-dependent attention mechanism over prior layers' outputs, claiming a 1.25x compute advantage with under 2% inference-latency overhead, validated on Kimi Linear (48B total / 3B active parameters). The paper sparked debate over prior art such as DeepCrossAttention and earlier Google work, highlighting tensions around novelty, citation quality, and frontier-scale validation. OpenAI's Codex showed strong momentum: over 2M weekly active users (nearly 4x growth YTD), with GPT-5.4 hitting 5T tokens/day and a $1B annualized run rate. Codex also added subagents to support multi-agent coding workflows. Infrastructure for coding agents matured with tools like Context Hub / chub supporting agent feedback loops, AssemblyAI's skill for Claude Code and Codex, and automated skill extraction from GitHub repos yielding 40% knowledge-transfer gains. LangChain launched the LangGraph CLI and open-sourced Deep Agents, recreating top coding-agent workflows with planning, filesystem ops, shell access, and sub-agents.
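The "attention over prior layers" idea can be sketched as an input-dependent gate that mixes the current layer's attention output with outputs cached from earlier layers. A minimal illustration (the gating form, shapes, and names here are assumptions for exposition, not the paper's exact formulation):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention_residual_mix(hidden, prior_outputs, gate_w):
    """Mix the current layer's attention output with outputs cached from
    prior layers, using input-dependent gate weights.

    hidden:        (seq, d)  current layer's attention output
    prior_outputs: list of (seq, d) outputs from earlier layers
    gate_w:        (d, n_priors + 1) projection producing per-source logits
    """
    sources = np.stack([hidden] + prior_outputs)   # (L+1, seq, d)
    gates = softmax(hidden @ gate_w, axis=-1)      # (seq, L+1), input-dependent
    # Per-token weighted sum over the current and prior-layer sources.
    return np.einsum("sl,lsd->sd", gates, sources)

rng = np.random.default_rng(0)
h = rng.standard_normal((4, 8))
priors = [rng.standard_normal((4, 8)) for _ in range(2)]
w = rng.standard_normal((8, 3))
out = attention_residual_mix(h, priors, w)
print(out.shape)  # (4, 8)
```

Because the gates depend on the hidden state, each token decides how much to draw from each earlier layer, which is the input-dependent part of the idea.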
  • Mar 12
    not much happened today
    gpt-5.4 openai anthropic uber nous-research cursor_ai redisinc artificialanlys langchain-js agent-infrastructure mcp-protocol harnesses coding-agents evaluation-methodologies agent-ui-ux runtime-environments multi-axis-evaluation automation workflow-optimization open-agent-platforms provider-integration filesystem-checkpoints mattturck hwchase17 omarsar0 gergelyorosz htihle theprimeagen sydneyrunkle corbtt
    Harnesses, agent infrastructure, and the MCP protocol are central themes, with emphasis on how harnesses, sandboxes, filesystem access, skills, memory, and observability shape agent UI/UX and runtime environments. Despite jokes about MCP's demise, it remains vital in production, notably used internally by Uber and supported by Anthropic. The coding-agent stack is evolving with CursorBench combining offline and online metrics to evaluate models on intelligence and efficiency, where GPT-5.4 leads in correctness and token efficiency. Agent-assisted development is splitting between automation-heavy workflows and "stay-in-the-loop" tooling, with OpenAI advancing Codex Automations featuring worktree vs. branch choices and UI customization. The open agent platform Hermes Agent v0.2.0 introduces full MCP client support, ACP server for editors, and expanded provider integrations including OpenAI OAuth.
  • Mar 11
    not much happened today
    nemotron-3-super gpt-oss-120b qwen3.5-122b-a10b nvidia perplexity replit base44 vllm llama.cpp ollama togethercompute baseten wandb langchain unsloth model-architecture model-optimization inference-speed kv-cache multi-token-prediction agent-infrastructure orchestration persistent-agents model-serving product-launches karpathy ctnzr bnjmn_marie artificialanlys
    NVIDIA’s Nemotron 3 Super is a 120B parameter / ~12B active open model featuring a hybrid Mamba-Transformer / SSM Latent MoE architecture and 1M context window, delivering up to 2.2x faster inference than GPT-OSS-120B in FP4 with strong throughput gains. It supports agentic workloads and is unusually open with weights, data, and infrastructure details released. The model scored 36 on the AA Intelligence Index, outperforming GPT-OSS-120B but behind Qwen3.5-122B-A10B. Community and infrastructure support from projects like vLLM, llama.cpp, Ollama, Together, Baseten, W&B Inference, LangChain, and Unsloth GGUFs was immediate. Key technical innovations include native multi-token prediction (MTP) and a significant KV-cache efficiency advantage. On the product side, a shift towards persistent agent runtimes and orchestration layers is highlighted, with Andrej Karpathy advocating for a "bigger IDE" concept where agents replace files as the unit of work, enabling legible, forkable agentic organizations with real-time control. New launches fitting this vision include Perplexity’s Personal Computer, an always-on local/cloud hybrid running on Mac mini, and Computer for Enterprise orchestrating 20 specialized models and 400+ apps. Replit Agent 4 offers a collaborative, canvas-like workflow with parallel agents, while Base44 Superagents provide integrated solutions for nontechnical users. The engineering focus is increasingly on the orchestration harness rather than just the model.
  • Mar 10
    Yann LeCun’s AMI Labs launches with a $1.03B seed to build world models around JEPA
    Yann LeCun launched Advanced Machine Intelligence (AMI Labs) with a record $1.03B seed round at a $3.5B pre-money valuation, aiming to build AI models that understand the physical world through world models rather than just language prediction. The startup, based in Europe with locations in Paris and Zürich, is framed as a major milestone for European AI and backed by a prominent founding team including Alex Lebrun, Saining Xie, and Pascale Fung. The mission is described as a "long-term scientific endeavor" to create AI that "perceives, learns, reasons and acts" in the real world.
  • Mar 10
    not much happened today
    opus-4.6 glm-5 anthropic ibm perplexity-ai llamaindex deepseek google-chrome persistent-memory agent-infrastructure cross-device-synchronization long-context sparse-attention inference-optimization computer-architecture task-completion systems-performance pamelafox tadasayy llama_index bromann dair_ai omarsar0 abxxai teknuim bcherny kimmonismus _catwu alexalbert__ realyushibai
    MCP tools remain relevant for deterministic APIs despite ergonomic criticisms, with new web MCP support in Chrome v146 enabling continuous browsing agents. Persistent memory is emerging as a key differentiator for agents, with IBM improving task completion rates and multi-agent memory framed as a computer-architecture challenge. Agent UX is evolving towards always-on, cross-device operation, exemplified by Perplexity Computer on iOS and Claude Code session management. Anthropic made the 1M-token context window the default for Opus 4.6 with no extra long-context API charges, achieving 78.3% on MRCR v2 at 1M tokens. Sparse-attention optimizations like IndexCache in DeepSeek Sparse Attention yield significant speedups on large models with minimal code changes.
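The gist of an index cache for sparse attention is reusing a top-k key selection across several decode steps instead of recomputing it per token. A toy illustration (the class name, refresh policy, and interval are assumptions, not DeepSeek's actual implementation):

```python
import numpy as np

class IndexCacheSketch:
    """Toy sketch: reuse top-k key indices across decode steps, refreshing
    only every `refresh_every` steps instead of re-selecting per token."""

    def __init__(self, k, refresh_every=4):
        self.k = k
        self.refresh_every = refresh_every
        self.cached_idx = None
        self.step = 0

    def select(self, scores):
        # scores: (num_keys,) relevance of each cached key to the new query
        if self.cached_idx is None or self.step % self.refresh_every == 0:
            # Cache miss / refresh: pick the k highest-scoring key indices.
            self.cached_idx = np.argpartition(scores, -self.k)[-self.k:]
        self.step += 1
        return self.cached_idx

cache = IndexCacheSketch(k=2)
idx = cache.select(np.array([0.1, 0.9, 0.3, 0.8]))
print(sorted(idx.tolist()))  # [1, 3]
```

The speedup comes from amortizing the selection work; attention itself then only touches the cached subset of keys.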
  • Mar 09
    Autoresearch: Sparks of Recursive Self Improvement
    claude-3 codex anthropic openai cognition automated-machine-learning coding-agents bug-fixing model-autonomy multi-agent-systems pr-review systems-engineering model-verification karpathy yi_tay jakub_pachocki
    This recursive self-improvement (RSI) recap covers AI developments from 3/5/2026 to 3/9/2026, highlighting the emergence of LLMs autonomously training smaller LLMs, a significant "AutoML moment" in AI progress. Karpathy and Yi Tay discuss "vibe training," where AI models fix bugs and improve code autonomously, suggesting models may soon surpass human debugging efficiency. The report anticipates Jakub Pachocki's Automated AI Research Intern system arriving by September 2026 to accelerate human researchers. On AI Twitter, the focus is on coding agents shifting bottlenecks from implementation to review and verification, with Anthropic's Claude Code Review significantly improving PR-review effectiveness, and tools like OpenAI Codex Review and Cognition's Devin Review enhancing code-review workflows. Harness engineering is evolving into systems engineering, emphasizing decoupling agent storage from compute for collaborative agent teams.
  • Mar 06
    not much happened today
    gpt-5.4 gpt-5.2 gemini-3.1-pro openai artificial-analysis gemini claude mit figma github benchmarking physics-reasoning agentic-coding hallucination-detection context-windows cost-efficiency agent-prompting scheduled-tasks loop-patterns ai-evaluation design-code-integration agent-orchestration open-source
    OpenAI rolled out GPT-5.4, tying Gemini 3.1 Pro Preview for #1 on the Artificial Analysis Intelligence Index with a score of 57 (up from 51 for GPT-5.2 xhigh). GPT-5.4 features a larger ~1.05M-token context window and higher per-token prices ($2.50/$15 vs $1.75/$14 for GPT-5.2), with strengths in physics reasoning (CritPt) and agentic coding (TerminalBench Hard) but a higher hallucination rate and ~28% higher benchmark run cost. The GPT-5.4 Pro variant shows a +10 point jump on CritPt, reaching 30%, but at an extreme output-token cost of $180 / 1M tokens. Community benchmarks show GPT-5.4 excels in agentic/coding tasks but drew mixed feedback on reasoning efficiency and literalness compared to Claude. OpenAI updated agent prompting guidance for GPT-5.4 API users, emphasizing tool use, structured outputs, and verification loops. Claude Code added local scheduled tasks and loop patterns for agents. The MCP framework is highlighted as connective tissue for AI evaluation and design-code round-trips, with Truesight MCP enabling unit-test-style AI evaluation and the Figma MCP server supporting bidirectional design-code integration. Open-source T3 Code launched as an agent-orchestration coding app built on Codex CLI.
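The "verification loops" pattern in agent prompting amounts to generate-then-verify with retry on feedback. A generic sketch (the function names and retry policy are illustrative assumptions, not OpenAI's guidance verbatim):

```python
def run_with_verification(generate, verify, max_attempts=3):
    """Generate-then-verify loop: retry with verifier feedback until a
    draft passes or attempts run out. `generate` and `verify` stand in
    for an LLM call and a checker (tests, schema validation, etc.)."""
    feedback = None
    for _ in range(max_attempts):
        draft = generate(feedback)
        ok, feedback = verify(draft)
        if ok:
            return draft
    return None

# Toy usage: "generate" counts up until the verifier accepts an even number.
state = {"n": 0}
def gen(_feedback):
    state["n"] += 1
    return state["n"]
def ver(x):
    return (x % 2 == 0, "want an even number")

print(run_with_verification(gen, ver))  # 2
```

In practice the verifier is the expensive design decision: the loop only helps when failures produce feedback the generator can act on.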
  • Mar 05
    GPT 5.4: SOTA Knowledge Work -and- Coding -and- CUA Model, OpenAI is so very back
    gpt-5.4 gpt-5.4-pro openai cursor_ai perplexity_ai arena native-computer-use long-context efficiency steering benchmarking gpu-kernels attention-mechanisms algorithmic-optimization pipeline-optimization sama reach_vb scaling01 danshipper yuchenj_uw
    OpenAI launched GPT-5.4 and GPT-5.4 Pro with unified mainline and Codex models, featuring native computer use, up to ~1M token context, and efficiency improvements including a new Codex /fast mode. Benchmarks showed strong results like OSWorld-Verified 75.0% surpassing human baseline and GDPval 83% against industry pros. User feedback highlighted coding utility but raised concerns about pricing and overthinking. Integration with devtools like Cursor, Perplexity, and Arena was announced. In systems research, FlashAttention-4 (FA4) was introduced with near-matmul speed attention on Blackwell GPUs, featuring innovations like polynomial exp emulation and online softmax. "Steering mid-response" and "fewer tokens, faster speed" were emphasized as UX and efficiency improvements.
  • Mar 04
    not much happened today
    gemini-3.1-flash-lite gpt-5.4 claude-opus-4.6 qwen-3.5 qwen google-deepmind openai anthropic alibaba nvidia meta-ai-fair hugging-face model-positioning latency cost-efficiency context-window extreme-reasoning agentic-ai model-updates general-agent-behavior visual-mathematics leadership-exits organizational-restructuring compute-access research-workflows open-weight-models ecosystem-dependence demishassabis natolambert poezhao0605 simonw
    Gemini 3.1 Flash-Lite is highlighted by Demis Hassabis for its speed and cost-efficiency, focusing on latency and cost per capability rather than raw performance. NotebookLM Studio introduces a new feature for generating immersive cinematic video overviews. Rumors about GPT-5.4 suggest a ~1 million token context window and an "extreme reasoning mode" for long-horizon tasks, with speculation about monthly model updates from OpenAI. Anthropic's Claude Opus 4.6 is noted for strong general agent behavior but weaker visual mathematics performance. Alibaba's Qwen team faces leadership exits and restructuring, with concerns about compute access and organizational changes. Qwen models dominate research workflows, appearing in 41% of Hugging Face papers in 2025-2026, raising ecosystem dependence risks. The open-weight model landscape may consolidate around non-profits, NVIDIA, and Meta due to business incentives.
  • Mar 03
    not much happened today
    gemini-3.1-flash-lite gemini-3 gpt-5.3 gpt-5.4 qwen google-deepmind google openai alibaba multimodality latency throughput context-window model-pricing model-benchmarking model-performance conversational-ai hallucination-reduction api model-rollout leadership-exit jeffdean noamshazeer sundarpichai aidan_mclau justinlin610
    Google DeepMind launched Gemini 3.1 Flash-Lite, emphasizing dynamic thinking levels for adjustable compute, with notable metrics like $0.25/M input, $1.50/M output, 1432 Elo on LMArena, and 2.5× faster time-to-first-token than Gemini 2.5 Flash. It supports a 1M context window and high throughput for multimodal inputs including text, images, video, audio, and PDFs. OpenAI rolled out GPT-5.3 Instant to all ChatGPT users, improving conversational naturalness and reducing hallucinations by 26.8% with search. The upcoming GPT-5.4 was teased amid speculation. Alibaba's Qwen faces leadership exits, raising concerns about its future and open-source status. The news highlights advancements in model efficiency, pricing, and multimodality, alongside organizational changes impacting AI development.
  • Mar 02
    not much happened today
    qwen-3.5-0.8b qwen-3.5-2b qwen-3.5-4b qwen-3.5-9b codex-5.3 claude-3 alibaba ollama lm-studio openai anthropic multimodality reinforcement-learning long-context hybrid-attention on-device-ai model-deployment agent-reliability agent-observability coding-agents benchmarking runtime-optimization token-efficiency nrehiew_ kimmonismus lioronai danielhanchen theo htihle teortaxestex theprimeagen yuchenj_uw _lewtun saen_dev _philschmid omarsar0
    Alibaba released the Qwen 3.5 series with models ranging from 0.8B to 9B parameters, featuring native multimodality, scaled reinforcement learning, and targeting edge and lightweight agent deployments. The models support very long context windows up to 262K tokens (extendable to 1M) and use a novel Gated DeltaNet hybrid attention architecture combining linear and full attention layers. Deployment examples include Ollama and LM Studio, with a notable 6-bit on-device demo on iPhone 17 Pro. Evaluators are cautioned that reasoning is disabled by default on smaller models. In coding agents, Codex 5.3 shows promising benchmark results on WeirdML with 79.3% accuracy, though availability and downtime remain critical challenges, especially highlighted by Claude outages. Agent reliability and observability are emphasized as cross-functional problems requiring clear success criteria and practical evaluation strategies. Studies show that using AGENTS.md and SKILL.md guardrails can significantly reduce runtime and token usage by mitigating worst-case thrashing in coding workflows.
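A hybrid linear/full attention stack of the kind described can be sketched as a fixed layer schedule interleaving a full-attention layer among linear-attention (DeltaNet-style) layers. The 1-in-4 ratio below is an assumption for illustration, not Qwen 3.5's actual ratio:

```python
def hybrid_layer_schedule(n_layers, full_attn_every=4):
    """Sketch of a hybrid stack: mostly linear-attention layers, with a
    full-attention layer interleaved at a fixed interval. Linear layers
    keep per-token cost constant; the periodic full layers restore
    global token-to-token mixing."""
    return [
        "full" if (i + 1) % full_attn_every == 0 else "linear"
        for i in range(n_layers)
    ]

print(hybrid_layer_schedule(8))
# ['linear', 'linear', 'linear', 'full', 'linear', 'linear', 'linear', 'full']
```

The appeal for 262K+ contexts is that only the periodic full-attention layers pay quadratic cost, while the rest scale linearly in sequence length.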
  • Feb 27
    OpenAI closes $110B raise from Amazon, NVIDIA, SoftBank in largest startup fundraise in history @ $840B post-money
    codex chatgpt openai softbank nvidia amazon microsoft model-scaling model-metrics investment cloud-computing infrastructure training-capacity user-growth partnerships sama
    OpenAI has closed a major funding round totaling $110 billion at a $730 billion pre-money valuation, with investments from SoftBank ($30B), NVIDIA ($30B), and Amazon ($50B). Key user metrics include 1.6 million weekly Codex users, over 9 million paying business users of ChatGPT, and more than 900 million weekly active ChatGPT users with 50 million consumer subscribers. The partnership with Amazon includes exclusive cloud services and 2 gigawatts of Trainium capacity. Microsoft maintains a reduced partnership with stateless APIs. This funding round is one of the largest in history, highlighting OpenAI's dominant position in AI adoption and infrastructure.
  • Feb 26
    Nano Banana 2 aka Gemini 3.1 Flash Image Preview: the new SOTA Imagegen model
    gemini-3.1-flash gpt-5.2 gpt-5.3-codex opus-4.6 claude google google-deepmind microsoft anthropic perplexity-ai image-generation text-rendering 3d-imaging real-time-information agentic-ai persistent-memory multi-agent-systems tooling coding-agents task-delegation sundarpichai demishassabis mustafasuleyman yusuf_i_mehdi borisdayma aravsrinivas
    Google and DeepMind launched Nano Banana 2 (aka Gemini 3.1 Flash Image Preview), a leading image generation and editing model integrated across multiple Google products with features like 4K upscaling, multi-subject consistency, and real-time search-conditioned generation. Evaluations rank it #1 in text-to-image tasks with competitive pricing. Additionally, advances in agentic coding are noted with models like GPT-5.2, GPT-5.3 Codex, Opus 4.6, and Gemini 3.1, alongside Microsoft's Copilot Tasks introducing task delegation. Persistent memory features are rolling out in Claude models, though interoperability challenges remain.
  • Feb 25
    Agentic Engineering: WTF Happened in December 2025?
    gpt-5.3-codex claude-code perplexity openai anthropic langchain-ai coding-agents agent-architecture distributed-workflows usage-based-pricing model-routing benchmarking context-length observability software-development karpathy aravsrinivas lioronai denisyarats swyx catwu hwchase17
    Perplexity launched Computer, an orchestration-first agent platform featuring multi-model routing, usage-based pricing, and parallel asynchronous sub-agents for distributed workflows. Andrej Karpathy claims a "phase change" in coding agents since December, highlighting sustained long-horizon task completion. OpenAI released GPT-5.3-Codex with ~25% speed improvements and strong benchmark performance, while Claude Code celebrates its first year with ecosystem integrations and scaling challenges. This marks a significant shift in coding workflows and agent-based software development.
  • Feb 24
    Anthropic accuses DeepSeek, Moonshot, and MiniMax of "industrial-scale distillation attacks".
    claude claude-3 codex claude-code anthropic deepseek moonshot-ai minimax openai ollama api-abuse-resistance model-security agentic-engineering coding-agents model-distillation workflow-automation sandboxing realtime-communication simon_willison
    Anthropic alleges industrial-scale distillation attacks on its Claude model by DeepSeek, Moonshot AI, and MiniMax, involving ~24,000 fraudulent accounts and >16M Claude exchanges to extract capabilities, raising concerns about competitive risks and safety. The community debates the difference between scraping and API-output extraction, highlighting a shift toward protecting models via API abuse resistance techniques. Meanwhile, coding agents like Codex and Claude Code see real adoption and failures, with emerging best practices in "agentic engineering" led by Simon Willison. The OpenClaw ecosystem expands with alternatives like NanoClaw and integrations such as Ollama 0.17 simplifying open model usage.
  • Feb 24
    Claude Code Anniversary + Launches from: Qwen 3.5, Cursor Demos, Cognition Devin 2.2, Inception Mercury 2
    qwen3.5-flash qwen3.5-35b-a3b qwen3.5-122b-a10b qwen3.5-27b qwen3.5-397b-a17b gpt-5.3-codex claude-code alibaba openai anthropic cursor huggingface model-architecture reinforcement-learning quantization context-windows agentic-ai api websockets software-ux enterprise-workflows model-deployment awnihannun andrew_n_carr justinlin610 unslothai terryyuezhuo haihaoshen 0xsero ali_tongyilab scaling01 gdb noahzweben _catwu
    Alibaba launched the Qwen 3.5 Medium Model Series featuring models like Qwen3.5-Flash, Qwen3.5-35B-A3B (MoE), and Qwen3.5-122B-A10B (MoE) emphasizing efficiency over scale with innovations like 1M context and INT4 quantization. OpenAI released GPT-5.3-Codex via the Responses API with enhanced file input support and faster web socket-based throughput. Anthropic introduced Claude Code Remote Control enabling terminal session continuation from mobile and expanded enterprise workflow features. Cursor shifted UX to agent demo videos instead of diffs, highlighting new interaction modes.
  • Feb 21
    not much happened today
    gemini-3.1-pro gpt-5.2 opus-4.6 sonnet-4.6 claude-opus-4.6 google-deepmind anthropic context-arena artificial-analysis epoch-ai scaling01 retrieval benchmarking evaluation-methodology token-limits cost-efficiency instruction-following software-reasoning model-reliability dillonuzar artificialanlys yuchenj_uw theo minimax_ai epochairesearch paul_cal scaling01 metr_evals idavidrein xlr8harder htihle arena
    Gemini 3.1 Pro demonstrates strong retrieval capabilities and cost efficiency compared to GPT-5.2 and Opus 4.6, though users report tooling and UI issues. The SWE-bench Verified evaluation methodology is under scrutiny for consistency, with updates bringing results closer to developer claims. Benchmarking debates arise over what frontier models truly measure, especially with ARC-AGI puzzles. Claude Opus 4.6 shows a noisy but notable 14.5-hour time horizon on software tasks, with token limits causing practical failures. Sonnet 4.6 improves significantly in code and instruction-following benchmarks, but user backlash grows due to product regressions.
  • Feb 19
    Gemini 3.1 Pro: 2x 3.0 on ARC-AGI 2
    gemini-3.1-pro gemini-3-deep-think google google-deepmind geminiapp reasoning benchmarking agentic-ai cost-efficiency hallucination code-generation model-release developer-tools sundarpichai demishassabis jeffdean koraykv noamshazeer joshwoodward artificialanlys arena oriolvinyalsml scaling01
    Google released Gemini 3.1 Pro, a developer preview integrated across the Gemini app, NotebookLM, Gemini API / AI Studio, and Vertex AI, highlighting a significant reasoning improvement with ARC-AGI-2 = 77.1% and strong coding and agentic-tool benchmarks like SWE-Bench Verified = 80.6%. Independent evaluators such as Artificial Analysis and Arena confirmed top-tier performance and cost efficiency, though community reactions included excitement about practical gains, skepticism about benchmark targeting, and concerns over rollout inconsistencies. The release emphasizes the same core intelligence powering Gemini 3 Deep Think scaled for practical use, with notable mentions from leaders like @sundarpichai, @demishassabis, and @JeffDean.
  • Feb 18
    not much happened today
    claude-4.6 claude-opus-4.6 claude-sonnet-4.6 qwen-3.5 qwen3.5-397b-a17b glm-5 gemini-3.1-pro minimax-m2.5 anthropic alibaba scaling01 arena artificial-analysis benchmarking token-efficiency ai-agent-autonomy reinforcement-learning asynchronous-learning model-performance open-weights reasoning software-engineering agentic-engineering eshear theo omarsar0 grad62304977 scaling01
    Anthropic released Claude Opus/Sonnet 4.6, showing a significant intelligence index jump but with increased token usage and cost. Anthropic also shared insights on AI agent autonomy, highlighting human-in-the-loop prevalence and software engineering tool calls. Alibaba launched Qwen 3.5 with discussions on reasoning efficiency and token bloat, plus open-sourced Qwen3.5-397B-A17B FP8 weights. The GLM-5 technical report introduced asynchronous agent reinforcement learning and compute-efficient techniques. Rumors about Gemini 3.1 Pro suggest longer reasoning capabilities, while MiniMax M2.5 appeared on community leaderboards. The community debates benchmark reliability and model performance nuances.
  • Feb 17
    Claude Sonnet 4.6: clean upgrade of 4.5, mostly better with some caveats
    claude-3-sonnet-4.6 claude-3-sonnet-4.5 claude-3-opus-4.5 claude-3-opus-4.6 anthropic cursor microsoft perplexity-ai cognition long-context agent-planning knowledge-work benchmarking tokenization model-integration code-execution model-updates aesthetic-quality alexalbert__ scaling01 rishdotblog claudeai kimmonismus artificialanlys
    Anthropic launched Claude Sonnet 4.6, an upgrade over Sonnet 4.5, featuring broad improvements in coding, long-context reasoning, agent planning, knowledge work, and design, plus a 1M-token context window (beta). Benchmarks show Sonnet 4.6 leading on GDPval-AA ELO 1633, with significant token usage increases and improved output aesthetics. Integrations include Cursor, Windsurf, Microsoft Foundry, and Perplexity Pro/Max. Early user feedback noted some regression issues that were later fixed. Pricing remains the same as Sonnet 4.5. Tooling enhancements include code execution for filtering results, improving accuracy and efficiency.
  • Feb 16
    Qwen3.5-397B-A17B: the smallest Open-Opus class, very efficient model
    qwen3.5-397b-a17b qwen3.5-plus qwen3-max qwen3-vl kimi alibaba openai deepseek z-ai minimax kimi unsloth ollama vllm native-multimodality spatial-intelligence sparse-moe long-context model-quantization model-architecture model-deployment inference-optimization apache-2.0-license pete_steinberger justinlin610
    Alibaba released Qwen3.5-397B-A17B, an open-weight model featuring native multimodality, spatial intelligence, and a hybrid linear attention + sparse MoE architecture supporting 201 languages and long context windows up to 256K tokens. The model shows improvements over previous versions like Qwen3-Max and Qwen3-VL, with a sparsity ratio of about 4.3%. Community discussions highlighted the Gated Delta Networks enabling efficient inference despite large model size (~800GB BF16), with successful local runs on Apple Silicon using quantization techniques. The hosted API version, Qwen3.5-Plus, supports 1M context and integrates search and code interpreter features. This release follows other Chinese labs like Z.ai, Minimax, and Kimi in refreshing large models. The model is licensed under Apache-2.0 and is expected to be the last major release before DeepSeek v4. The news also notes Pete Steinberger joining OpenAI.
  • Feb 13
    MiniMax-M2.5: SOTA coding, search, toolcalls, $1/hour
    minimax-m2.5 glm-5 minimax-ai togethercompute huggingface intel wandb reinforcement-learning agent-based-models model-quantization benchmarking model-efficiency multi-turn-dialogue infrastructure-optimization cost-efficiency on-device-ai
    MiniMax-M2.5 is now open source, featuring an "agent-native" reinforcement learning framework called Forge trained across 200k+ RL environments for coding, tool use, and workflows. It boasts strong benchmark scores like 80.2% SWE-Bench Verified and emphasizes cost-efficiency with claims like "$1 per hour at 100 tps" and good on-device performance. The Forge RL system uses multi-level prefix caching and high rollout compute share (~60%) to generate millions of trajectories daily. Independent reviews note improved stability and multi-turn viability but high token usage. The ecosystem rapidly adopted MiniMax-M2.5 with quantized releases including 2-bit GGUF and INT4 formats. Meanwhile, Together markets GLM-5 as a leading open-source model for long-horizon agents with 77.8% SWE-Bench Verified and MoE efficiency using DeepSeek Sparse Attention.
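Prefix caching for high-volume RL rollouts rests on a simple idea: trajectories sharing a token prefix reuse the compute already spent on that prefix. A toy sketch (the dict-of-prefixes scheme and compute counter are stand-ins, not the Forge system's multi-level design):

```python
class PrefixCacheSketch:
    """Toy prefix cache: rollouts sharing a token prefix reuse the work
    already done for that prefix. A counter stands in for cached KV state."""

    def __init__(self):
        self.cache = {}    # prefix tuple -> cached "state"
        self.computed = 0   # tokens actually processed (cache misses)

    def encode(self, tokens):
        prefix = ()
        for t in tokens:
            prefix = prefix + (t,)
            if prefix not in self.cache:
                self.computed += 1          # simulate fresh compute
                self.cache[prefix] = len(prefix)
        return self.cache[prefix]

c = PrefixCacheSketch()
c.encode([1, 2, 3, 4])   # 4 tokens computed from scratch
c.encode([1, 2, 3, 9])   # only the final diverging token is new
print(c.computed)  # 5
```

With thousands of rollouts branching off the same task prompt, this kind of sharing is what makes a ~60% rollout compute share affordable.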
  • Feb 12
    new Gemini 3 Deep Think, Anthropic $30B @ $380B, GPT-5.3-Codex Spark, MiniMax M2.5
    gemini-3-deep-think-v2 arc-agi-2 google-deepmind google geminiapp arcprize benchmarking reasoning test-time-adaptation fluid-intelligence scientific-computing engineering-workflows 3d-modeling cost-analysis demishassabis sundarpichai fchollet jeffdean oriolvinyalsml tulseedoshi
    Google DeepMind is rolling out the upgraded Gemini 3 Deep Think V2 reasoning mode to Google AI Ultra subscribers and opening early access to the Vertex AI / Gemini API for select users. Key benchmark achievements include ARC-AGI-2 at 84.6%, Humanity’s Last Exam (HLE) at 48.4% without tools, and a Codeforces Elo of 3455, showcasing Olympiad-level performance in physics and chemistry. The mode emphasizes practical scientific and engineering applications such as error detection in math papers, physical system modeling, semiconductor optimization, and a sketch to CAD/STL pipeline for 3D printing. ARC benchmark creator François Chollet highlights the benchmark's role in advancing test-time adaptation and fluid intelligence, projecting human-AI parity around 2030. This rollout is framed as a productized, compute-heavy test-time mode rather than a lab demo, with cost disclosures for ARC tasks provided.
  • Feb 11
    Z.ai GLM-5: New SOTA Open Weights LLM
    glm-5 glm-4.5 kimi-k2.5 zhipu-ai openrouter modal deepinfra ollama qoder vercel deepseek-sparse-attention long-context model-scaling pretraining benchmarking office-productivity context-window model-deployment cost-efficiency
    Zhipu AI launched GLM-5, an Opus-class model scaling from 355B to 744B parameters with DeepSeek Sparse Attention integration for cost-efficient long-context serving. GLM-5 achieves SOTA on BrowseComp and leads on Vending Bench 2, focusing on office productivity tasks and surpassing Kimi K2.5 on the GDPVal-AA benchmark. Despite broad availability on platforms like OpenRouter, Modal, DeepInfra, and Ollama Cloud, GLM-5 faces compute constraints impacting rollout and pricing. The model supports up to 200K context length and 128K max output tokens.
  • Feb 10
    Qwen-Image 2.0 and Seedance 2.0
    gpt-5.2 gpt-5.3-codex claude-opus-4.6 gemini-3-pro qwen-image-2.0 seedance-2.0 openai langchain-ai anthropic google-deepmind mistral-ai alibaba bytedance moonshot agentic-sandboxes multi-model-orchestration server-side-compaction coding-agent-ux long-running-agents model-release text-to-video image-generation parallel-execution funding git-compatible-database token-efficiency workflow-optimization hwchase17 nabbilkhan sydneyrunkle joecuevasjr pierceboggan reach_vb gdb ashtom
    OpenAI advances its Responses API for multi-hour agent workflows with features like server-side compaction, hosted containers, and Skills API, alongside upgrading Deep Research to GPT-5.2 and adding connectors. Discussions around sandbox design highlight a shift towards sandbox-as-a-tool architectures, with LangChain enhancing its deepagents v0.4 with pluggable sandbox backends. Coding agent UX evolves with multi-model orchestration involving Claude Opus 4.6, GPT-5.3-Codex, and Gemini 3 Pro. EntireHQ raised $60M seed funding for a Git-compatible database capturing code intent and agent context. In model releases, Alibaba Qwen launched Qwen-Image-2.0 emphasizing 2K resolution and 1K-token prompts for unified generation and editing. ByteDance's Seedance 2.0 marks a significant leap in text-to-video quality, while Moonshot's Kimi introduces an Agent Swarm with up to 100 sub-agents and 4.5× faster parallel execution.
  • Feb 09
    not much happened today
    gpt-5.3-codex claude-opus-4.6 openai anthropic cursor_ai github microsoft builder-tooling cybersecurity api-access model-rollout agentic-ai long-context serving-economics throughput-latency token-efficiency workflow-design sama pierceboggan kylebrussell natolambert omarsar0 sam_altman
    OpenAI launched GPT-5.3-Codex with a Super Bowl ad emphasizing "You can just build things" as a product strategy, focusing on builder tooling over chat interfaces. The model is rolling out across Cursor, VS Code, and GitHub with phased API access and is flagged as their first "high cybersecurity capability" model. Sam Altman reported over 1M Codex app downloads in the first week and strong weekly user growth. Meanwhile, Anthropic's Claude Opus 4.6 is recognized as a leading "agentic generalist" model, topping text and code leaderboards but noted for high token usage. Discussions around serving economics and "fast mode" behavior highlight practical deployment considerations. Additionally, Recursive Language Models (RLMs) introduce a novel approach using a second programmatic context space to extend long-context capabilities.
  • Feb 06
    not much happened today
    gpt-5.3-codex claude-opus-4.6 nanochat-gpt-2 openai anthropic langchain agent-systems ai-engineering benchmarking software-organization sandboxing tracing state-management recursive-language-models context-management karpathy sama swyx omarsar0 hamelhusain deepfates
    AI News for early February 2026 highlights a detailed comparison between GPT-5.3-Codex and Claude Opus 4.6, with users noting Codex's strength on detailed, tightly scoped tasks and Opus's ergonomic advantage for exploratory work. Benchmarks on Karpathy's nanochat GPT-2 speedrun show Opus 4.6 achieving better wall-clock performance, while Codex-5.3-xhigh sometimes suffers from context issues. Karpathy cautions that current models are not yet reliable for fully autonomous AI engineering. Discussions on agent swarms reveal emerging parallels to software organizational design, with Anthropic-style agent coordination systems and LangChain/LangSmith emphasizing environment engineering through tracing, sandboxing, and state control. Recursive Language Models (RLMs) are introduced as a future direction for agent systems to reduce context rot and improve structured communication.
  • Feb 05
    OpenAI and Anthropic go to war: Claude Opus 4.6 vs GPT 5.3 Codex
    gpt-5.3-codex opus-4.6 openai anthropic nvidia agentic-coding long-context token-efficiency inference-speed hardware-software-co-design agent-platforms benchmarking software-development compiler-construction
    OpenAI launched GPT-5.3-Codex, emphasizing token efficiency, inference speed, and hardware/software co-design on GB200-NVL72 in collaboration with NVIDIA. The new Frontier agent platform supports business-context agents with execution environments and learning capabilities. Anthropic showcased Opus 4.6 agent teams autonomously building a clean-room C compiler capable of booting Linux, highlighting advances in agentic coding and long-context capabilities. Community benchmarks report 2.93× faster inference and significant efficiency gains, signaling a shift away from infinite compute budgets in 2026.
  • Feb 04
    ElevenLabs $500m Series D at $11B, Cerebras $1B Series H at $23B, Vibe Coding -> Agentic Engineering
    gemini-3 claude codex google openai github microsoft deepmind agent-frameworks model-deployment benchmarking cost-optimization software-development async-processing gpu-acceleration coding-agents user-adoption game-theory workflow-integration sama sundarpichai reach_vb
    Google's Gemini 3 is being integrated widely, including a new Chrome side panel and Nano Banana UX features, with rapid adoption and a 78% reduction in serving unit costs. The Gemini app reached 750M+ MAU in Q4 2025, nearing ChatGPT's user base. Google is also benchmarking AI "soft skills" through games like Poker and Chess in the Kaggle Game Arena. Meanwhile, coding agents are converging in IDEs: VS Code launched Agent Sessions supporting Claude and Codex agents with features like parallel subagents and integrated browsers. GitHub Copilot now allows agent choice between Claude and OpenAI Codex for async backlog clearing. OpenAI reports 1M+ active users for Codex with expanded integration surfaces, though some users request better GPU support. The coding-agent ecosystem is professionalizing with community platforms like OpenClaw and tooling such as ClawHub, along with CLI updates. "Gemini 3 adoption faster than any other model" and "VS Code as home for coding agents" highlight major industry shifts.
  • Feb 03
    Context Graphs: Hype or actually Trillion-dollar opportunity?
    glm-ocr qwen3-coder-next sera-14b zhipu-ai lmsys vllm novita-labs ollama alibaba allenai cognition cursor multimodality ocr long-context mixture-of-experts agentic-coding-models context-graphs benchmarking model-deployment model-optimization model-training jaya_gupta dharmesh_shah
    Zhipu AI launched GLM-OCR, a lightweight 0.9B multimodal OCR model excelling at complex document understanding, with top benchmark scores and day-0 deployment support from LMSYS, vLLM, and Novita Labs. Ollama enabled local-first usage with easy offline operation. Alibaba released Qwen3-Coder-Next, an 80B MoE model with only 3B active parameters, designed for coding agents with a massive 256K context window and trained on 800K verifiable tasks, achieving over 70% on SWE-Bench Verified. The open coding ecosystem also saw Allen AI announce SERA-14B, an on-device-friendly coding model with new datasets. The emerging concept of Context Graphs was highlighted as a promising framework for data and agent traceability, with initiatives like Cursor's Agent Trace specifying context graphs for coding agents, emphasizing potential improvements in agent performance and customer-driven adoption. This coverage reflects ongoing innovation in multimodality, long context, mixture-of-experts, and agentic coding models.

Let's Connect

If you want to get in touch with me about something or just to say hi, reach out on social media or send me an email.

  • GitHub /
  • X (@smol_ai) /
  • swyx at smol dot ai
© 2026 • AINews
You can also subscribe by RSS.