AINews


by smol.ai

How over 80k top AI Engineers keep up, every weekday.



We summarize top AI discords + AI reddits + AI X/Twitters, and send you a roundup each day!

"Highest-leverage 45 mins I spend everyday" - Soumith

"best AI newsletter atm" and "I'm not sure that enough people subscribe" - Andrej

"genuinely incredible" - Chris

"surprisingly decent" - Hamel

You can pay for a customizable version here. Thanks to Pieter Levels for the Lex Fridman feature!

Last 30 days in AI

See all issues
  • Oct 30
    not much happened today
    kimi-linear kimi-delta-attention minimax-m2 looped-llms aardvark-gpt-5 moonshot-ai minimax bytedance princeton mila openai cursor cognition hkust long-context attention-mechanisms agentic-ai tool-use adaptive-compute coding-agents performance-optimization memory-optimization reinforcement-learning model-architecture kimi_moonshot scaling01 uniartisan omarsar0 aicodeking songlinyang4 iscienceluvr nrehiew_ gdb embeddedsec auchenberg simonw
    Moonshot AI released Kimi Linear (KDA) with day-0 infrastructure and strong long-context metrics, achieving up to 75% KV cache reduction and 6x decoding throughput. MiniMax M2 pivoted to full attention for multi-hop reasoning, maintaining strong agentic coding performance with 200k context and ~100 TPS. ByteDance, Princeton, and Mila introduced Looped LLMs showing efficiency gains comparable to larger transformers. OpenAI's Aardvark (GPT-5) entered private beta as an agentic security researcher for scalable vulnerability discovery. Cursor launched faster cloud coding agents, though transparency concerns arose regarding base-model provenance. Cognition released a public beta for a desktop/mobile tool-use agent named Devin. The community discussed advanced attention mechanisms and adaptive compute techniques.
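Hybrid linear-attention designs like KDA save memory chiefly by shrinking the KV cache that grows with context length. A back-of-the-envelope sketch of why a 75% reduction matters at long context — the layer counts, head sizes, and fp16 width below are illustrative assumptions, not Kimi Linear's actual configuration:

```python
def kv_cache_bytes(n_layers, n_kv_heads, head_dim, seq_len, batch=1, dtype_bytes=2):
    """Per-request KV cache: a K and a V tensor (hence the 2) for every layer."""
    return 2 * n_layers * n_kv_heads * head_dim * seq_len * batch * dtype_bytes

# Hypothetical dense-attention config at 128k context, fp16 (dtype_bytes=2).
full = kv_cache_bytes(n_layers=60, n_kv_heads=8, head_dim=128, seq_len=128_000)
reduced = full * 0.25  # the claimed 75% reduction from the hybrid linear layers
print(f"{full / 2**30:.1f} GiB -> {reduced / 2**30:.1f} GiB per request")
```

Freeing tens of GiB per long-context request is what allows larger batches, which is where the decoding-throughput gains come from.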
  • Oct 30
    not much happened today
    Poolside raised $1B at a $12B valuation. Eric Zelikman raised $1B after leaving xAI. Weavy joined Figma. New research highlights that FP16 precision reduces training-inference mismatch in reinforcement-learning fine-tuning compared to BF16. Kimi AI introduced a hybrid KDA (Kimi Delta Attention) architecture improving long-context throughput and RL stability, alongside a new Kimi CLI for coding with agent-protocol support. OpenAI previewed Agent Mode in ChatGPT, enabling autonomous research and planning during browsing.
  • Oct 29
    Cursor 2.0 & Composer-1: Fast Models and New Agents UI
    composer-1 gpt-oss-safeguard-20b gpt-oss-safeguard-120b gpt-oss gpt-5-mini cursor_ai openai huggingface ollama cerebras groq goodfireai rakuten agentic-coding reinforcement-learning mixture-of-experts fine-tuning policy-classification open-weight-models inference-stacks cost-efficiency multi-agent-systems ide voice-to-code code-review built-in-browser model-optimization sasha_rush dan_shipper samkottler ellev3n11 swyx
    Cursor 2.0 launched with Composer-1, an agentic coding model optimized for speed and precision, featuring multi-agent orchestration, built-in browser for testing, and voice-to-code capabilities. OpenAI released gpt-oss-safeguard models (20B, 120B) for policy-based safety classification, open-weight and fine-tuned from gpt-oss, available on Hugging Face and supported by inference stacks like Ollama and Cerebras. Goodfire and Rakuten demonstrated sparse autoencoders for PII detection matching gpt-5-mini accuracy at significantly lower cost. The Cursor 2.0 update also includes a redesigned interface for managing multiple AI coding agents, marking a major advancement in AI IDEs. Early users emphasized Composer-1's "fast-not-slowest" tradeoff, which enables rapid iteration with a human in the loop.
  • Oct 28
    OpenAI completes Microsoft + For-profit restructuring + announces 2028 AI Researcher timeline + Platform / AI cloud product direction + next $1T of compute
    OpenAI has completed a major recapitalization and restructuring, forming a Public Benefit Corporation with a non-profit Foundation holding special voting rights and equity valued at $130B. Microsoft holds about 27% diluted ownership and committed to $250B in Azure spend, losing exclusivity on compute but retaining Azure API exclusivity until AGI is declared. The compute infrastructure deals for 2025 total 30GW worth $1.4T, with OpenAI aiming to build 1GW per week at $20B per GW, projecting $3-4 trillion infrastructure by 2033. The company is shifting focus from first-party apps to a platform approach, emphasizing ecosystem growth and third-party development. Sam Altman is the key figure in this transition, with significant financial and strategic implications for AI industry partnerships, including openness to Anthropic and Google Gemini on Azure.
  • Oct 27
    MiniMax M2 230BA10B — 8% of Claude Sonnet's price, ~2x faster, new SOTA open model
    minimax-m2 hailuo-ai huggingface baseten vllm modelscope openrouter cline sparse-moe model-benchmarking model-architecture instruction-following tool-use api-pricing model-deployment performance-evaluation full-attention qk-norm gqa rope reach_vb artificialanlys akhaliq eliebakouch grad62304977 yifan_zhang_ zpysky1125
    MiniMax M2, an open-weight sparse MoE model by Hailuo AI, launches with ≈200–230B parameters and 10B active parameters, offering strong performance near frontier closed models and ranking #5 overall on the Artificial Analysis Intelligence Index v3.0. It supports coding and agent tasks, is licensed under MIT, and is available via API at competitive pricing. The architecture uses full attention, QK-Norm, GQA, partial RoPE, and sigmoid routing, with day-0 support in vLLM and deployment on platforms like Hugging Face and Baseten. Despite verbosity and no tech report, it marks a significant win for open models.
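A sparse MoE with ~230B total but only ~10B active parameters runs just a few experts per token, chosen by a router. A toy numpy sketch of sigmoid-gated top-k routing in the spirit of M2's sigmoid router — the shapes, expert count, and gating details here are illustrative, not the actual architecture:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def moe_forward(x, experts, gate_w, k=2):
    """Sigmoid-gated top-k routing: score every expert, but run only
    the k best, so active parameters << total parameters."""
    scores = sigmoid(x @ gate_w)                 # one gate score per expert
    topk = np.argsort(scores)[-k:]               # indices of the k highest scores
    weights = scores[topk] / scores[topk].sum()  # renormalize over chosen experts
    return sum(w * experts[i](x) for w, i in zip(weights, topk))

rng = np.random.default_rng(0)
n_experts, d = 8, 4
mats = [rng.normal(size=(d, d)) for _ in range(n_experts)]
experts = [lambda x, W=W: x @ W for W in mats]   # each expert is a tiny linear map
gate_w = rng.normal(size=(d, n_experts))

y = moe_forward(rng.normal(size=d), experts, gate_w, k=2)
print(y.shape)
```

With k=2 of 8 experts active, only a quarter of the expert weights touch any given token — the same ratio logic that lets M2 price far below dense models of comparable quality.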
  • Oct 24
    not much happened today
    nemotron-nano-2 gpt-oss-120b qwen3 llama-3 minimax-m2 glm-4.6-air gemini-2.5-flash gpt-5.1-mini tahoe-x1 vllm_project nvidia mistral-ai baseten huggingface thinking-machines deeplearningai pytorch arena yupp-ai zhipu-ai scaling01 stanford transformer-architecture model-optimization inference distributed-training multi-gpu-support performance-optimization agents observability model-evaluation reinforcement-learning model-provenance statistical-testing foundation-models cancer-biology model-fine-tuning swyx dvilasuero _lewtun clementdelangue zephyr_z9 skylermiao7 teortaxestex nalidoust
    vLLM announced support for NVIDIA Nemotron Nano 2, featuring a hybrid Transformer–Mamba design and tunable "thinking budget" enabling up to 6× faster token generation. Mistral AI Studio launched a production platform for agents with deep observability. Baseten reported high throughput (650 TPS) for GPT-OSS 120B on NVIDIA hardware. Hugging Face InspectAI added inference provider integration for cross-provider evaluation. Thinking Machines Tinker abstracts distributed fine-tuning for open-weight LLMs like Qwen3 and Llama 3. In China, MiniMax M2 shows competitive performance with top models and is optimized for agents and coding, while Zhipu GLM-4.6-Air focuses on reliability and scaling for coding tasks. Rumors suggest Gemini 2.5 Flash may be a >500B parameter MoE model, and a possible GPT-5.1 mini reference appeared. Outside LLMs, Tahoe-x1 (3B) foundation model achieved SOTA in cancer cell biology benchmarks. Research from Stanford introduces a method to detect model provenance via training-order "palimpsest" with strong statistical guarantees.
  • Oct 23
    not much happened today
    gemini-1.5-pro claude-3 chatgpt langchain meta-ai-fair hugging-face openrouter google-ai microsoft openai anthropic agent-ops observability multi-turn-evaluation reinforcement-learning distributed-training api model-stability user-intent-clustering software-development project-management code-generation hwchase17 ankush_gola11 whinthorn koylanai _lewtun bhutanisanyam1 thom_wolf danielhanchen cline canvrno pashmerepat mustafasuleyman yusuf_i_mehdi jordirib1 fidjissimo bradlightcap mikeyk alexalbert__
    LangSmith launched the Insights Agent with multi-turn evaluation for agent ops and observability, improving failure detection and user intent clustering. Meta PyTorch and Hugging Face introduced OpenEnv, a Gymnasium-style API and hub for reproducible agentic environments supporting distributed training. Discussions highlighted the importance of provider fidelity in agent coding, with OpenRouter's exacto filter improving stability. Builder UX updates include Google AI Studio's Annotation mode for Gemini code changes, Microsoft's Copilot Mode enhancements in Edge, and OpenAI's Shared Projects and Company Knowledge features for ChatGPT Business. Claude added project-scoped Memory. In reinforcement learning, Meta's ScaleRL proposes a methodology to predict RL scaling outcomes for LLMs with improved efficiency and stability.
  • Oct 22
    not much happened today
    vllm chatgpt-atlas langchain meta microsoft openai pytorch ray claude agent-frameworks reinforcement-learning distributed-computing inference-correctness serving-infrastructure browser-agents security middleware runtime-systems documentation hwchase17 soumithchintala masondrxy robertnishihara cryps1s yuchenj_uw
    LangChain & LangGraph 1.0 released with major updates for reliable, controllable agents and unified docs, emphasizing "Agent Engineering." Meta introduced PyTorch Monarch and TorchForge for distributed programming and reinforcement learning, enabling large-scale agentic systems. Microsoft Learn MCP server now integrates with tools like Claude Code and VS Code for instant doc querying, accelerating grounded agent workflows. vLLM improved inference correctness with token ID returns and batch-invariant inference, collaborating with Ray for orchestration in PyTorch Foundation. OpenAI launched ChatGPT Atlas, a browser agent with contextual Q&A and advanced safety features, though early users note maturity challenges and caution around credential access.
  • Oct 21
    ChatGPT Atlas: OpenAI's AI Browser
    gemini atlas openai google langchain ivp capitalg sapphire sequoia benchmark agent-mode browser-memory chromium finetuning moe lora agent-runtime observability software-development funding kevinweil bengoodger fidjissimo omarsar0 yuchenj_uw nickaturley raizamrtn hwchase17 bromann casper_hansen_ corbtt
    OpenAI launched the Chromium fork AI browser Atlas for macOS, featuring integrated Agent mode and browser memory with local login capabilities, aiming to surpass Google's Gemini in Chrome. The launch received mixed reactions regarding reliability and privacy. LangChain raised a $125M Series B at a $1.25B valuation, releasing v1.0 agent engineering stack with significant adoption including 85M+ OSS downloads/month and usage by ~35% of the Fortune 500. The ecosystem also saw updates like vLLM's MoE LoRA expert finetuning support.
  • Oct 20
    DeepSeek-OCR finds vision models can decode 10x more efficiently with ~97% accuracy of text-only, 33/200k pages/day/A100
    deepseek-ocr deepseek3b-moe-a570m veo-3.1 deepseek-ai google-deepmind krea ocr vision multimodality model-compression long-context model-architecture video-generation autoregressive-models model-efficiency precision-editing karpathy teortaxestex reach_vb _akhaliq eliebakouch vikhyatk demishassabis
    As ICCV 2025 begins, DeepSeek releases a novel DeepSeek-OCR 3B MoE vision-language model that compresses long text as visual context with high accuracy and efficiency, challenging traditional tokenization approaches. The model achieves ~97% decoding precision at <10× compression and processes up to ~33M pages/day on 20 A100-40G nodes, outperforming benchmarks like GOT-OCR2.0. Discussions highlight the potential for unlimited context windows and tokenization-free inputs, with contributions from @karpathy, @teortaxesTex, and others. In video generation, Google DeepMind's Veo 3.1 leads community benchmarks with advanced precision editing and scene blending, while Krea open-sources a 14B autoregressive video model enabling realtime long-form generation at ~11 FPS on a single B200 GPU.
  • Oct 17
    The Karpathy-Dwarkesh Interview delays AGI timelines
    claude-haiku-4.5 gpt-5 arch-router-1.5b anthropic openai huggingface langchain llamaindex google epoch-ai reasoning long-context sampling benchmarking data-quality agent-frameworks modular-workflows ide-extensions model-routing graph-first-agents real-world-grounding karpathy aakaran31 du_yilun giffmana omarsar0 jeremyphoward claude_code mikeyk alexalbert__ clementdelangue jerryjliu0
    The Karpathy-Dwarkesh interview headlines the day, alongside significant discussion of reasoning improvements without reinforcement learning, with test-time sampling achieving GRPO-level performance. Critiques of context window marketing reveal effective limits near 64K tokens, with Claude Haiku 4.5 showing competitive reasoning speed. GPT-5 struggles with advanced math benchmarks, and data quality issues termed "Brain Rot" affect model reasoning and safety. In agent frameworks, Anthropic Skills enable modular coding workflows, OpenAI Codex IDE extensions enhance developer productivity, and HuggingChat Omni introduces meta-routing across 100+ open models using Arch-Router-1.5B. LangChain and LlamaIndex advance graph-first agent infrastructure, while Google Gemini integrates with Google Maps for real-world grounding.
  • Oct 16
    Claude Agent Skills - glorified AGENTS.md? or MCP killer?
    claude-4.5-haiku claude chatgpt huggingchat-omni anthropic openai microsoft perplexity-ai huggingface groq cerebras togethercompute agent-skills document-processing long-context reasoning multi-model-routing memory-management voice vision simonwillison alexalbert__ mustafasuleyman yusuf_i_mehdi aravsrinivas
    Anthropic achieves a rare feat with back-to-back AI news headlines featuring Claude's new Skills—a novel way to build specialized agents using Markdown files, scripts, and metadata to handle tasks like creating and reading PDFs, Docs, and PPTs. Simon Willison calls this a "bigger deal than MCP," predicting a "Cambrian explosion in Skills." Meanwhile, Anthropic launches Claude 4.5 Haiku with strong reasoning and long-context capabilities, priced competitively. Other updates include OpenAI's ChatGPT memory management improvements, Windows 11 Copilot voice and vision features, and HuggingChat Omni routing across 115 open-source models from 15 providers. These developments highlight advances in agent skills, document processing, long-context reasoning, and multi-model routing.
  • Oct 15
    Claude Haiku 4.5
    claude-3.5-sonnet claude-3-haiku claude-3-haiku-4.5 gpt-5 gpt-4.1 gemma-2.5 gemma o3 anthropic google yale artificial-analysis shanghai-ai-lab model-performance fine-tuning reasoning agent-evaluation memory-optimization model-efficiency open-models cost-efficiency foundation-models agentic-workflows swyx sundarpichai osanseviero clementdelangue deredleritt3r azizishekoofeh vikhyatk mirrokni pdrmnvd akhaliq sayashk gne
    Anthropic released Claude Haiku 4.5, a model that is over 2x faster and 3x cheaper than Claude Sonnet 4.5, improving iteration speed and user experience significantly. Pricing comparisons highlight Haiku 4.5's competitive cost against models like GPT-5 and GLM-4.6. Google and Yale introduced the open-weight Cell2Sentence-Scale 27B (Gemma) model, which generated a novel, experimentally validated cancer hypothesis, with open-sourced weights for community use. Early evaluations show GPT-5 and o3 models outperform GPT-4.1 in agentic reasoning tasks, balancing cost and performance. Agent evaluation challenges and memory-based learning advances were also discussed, with contributions from Shanghai AI Lab and others. "Haiku 4.5 materially improves iteration speed and UX," and "Cell2Sentence-Scale yielded validated cancer hypothesis" were key highlights.
  • Oct 14
    not much happened today
    qwen3-vl-4b qwen3-vl-8b qwen2.5-vl-72b deepseek-v3.1 alibaba arena runway nvidia togethercompute ollama model-optimization fine-tuning inference-speed video-generation diffusion-models representation-learning local-ai speculative-decoding fp8-quantization context-windows karpathy
    Alibaba released compact dense Qwen3-VL models at 4B and 8B sizes with FP8 options, supporting up to 1M context and open vocabulary detection, rivaling larger models like Qwen2.5-VL-72B. Ecosystem support includes MLX-VLM, LM Studio, vLLM, Kaggle models, and Ollama Cloud. In video AI, Arena added Sora 2 models leading in video benchmarks, with Higgsfield Enhancer improving video quality. Runway launched domain-specific workflow apps for creative tasks. Research on Representation Autoencoders for DiTs (RAE-DiT) shows improved diffusion model performance. On local training, NVIDIA DGX Spark enables strong local fine-tuning, while Nanochat by Karpathy offers a minimal stack for training and inference. Together AI introduced ATLAS, a speculative decoding method achieving up to 4× faster inference on DeepSeek-V3.1. These developments highlight advances in efficient model deployment, video AI, local fine-tuning, and inference speed optimization.
  • Oct 13
    OpenAI Titan XPU: 10GW of self-designed chips with Broadcom
    llama-3-70b openai nvidia amd broadcom inferencemax asic inference compute-infrastructure chip-design fp8 reinforcement-learning ambient-agents custom-accelerators energy-consumption podcast gdb
    OpenAI is finalizing a custom ASIC chip design to deploy 10GW of inference compute, complementing existing deals with NVIDIA (10GW) and AMD (6GW). This marks a significant scale-up from OpenAI's current 2GW compute, aiming for a roadmap of 250GW total, which is half the energy consumption of the US. Greg Brockman of OpenAI highlights the shift of ChatGPT from interactive use to always-on ambient agents requiring massive compute, emphasizing the challenge of building chips for billions of users. The in-house ASIC effort was driven by the need for tailored designs after limited success influencing external chip startups. Broadcom's stock surged 10% on the news. Additionally, InferenceMAX reports improved ROCm stability and nuanced performance comparisons between AMD MI300X and NVIDIA H100/H200 on llama-3-70b FP8 workloads, with RL training infrastructure updates noted.
  • Oct 10
    not much happened today
    gpt-5-pro gemini-2.5 vllm deepseek-v3.1 openai google-deepmind microsoft epoch-ai-research togethercompute nvidia mila reasoning reinforcement-learning inference speculative-decoding sparse-attention kv-cache-management throughput-optimization compute-efficiency tokenization epochairesearch yitayml _philschmid jiqizhixin cvenhoff00 neelnanda5 lateinteraction mgoin_ blackhc teortaxestex
    FrontierMath Tier 4 results show GPT-5 Pro narrowly outperforming Gemini 2.5 Deep Think in reasoning accuracy, with concerns about problem leakage clarified by Epoch AI Research. Mila and Microsoft propose Markovian Thinking to improve reasoning efficiency, enabling models to reason over 24K tokens with less compute. New research suggests base models inherently contain reasoning mechanisms, with "thinking models" learning to invoke them effectively. In systems, NVIDIA Blackwell combined with vLLM wins InferenceMAX with significant throughput gains, while Together AI's ATLAS adaptive speculative decoding achieves 4× speed improvements and reduces RL training time by over 60%. SparseServe introduces dynamic sparse attention with KV tiering, drastically improving throughput and latency in GPU memory management.
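Speculative decoding, the mechanism behind ATLAS-style speedups, pairs a cheap draft model with the expensive target model: the draft proposes a run of tokens, the target keeps the longest agreeing prefix and substitutes its own token at the first mismatch, so accepted runs cost roughly one target pass. A minimal greedy sketch with toy next-token functions — this is not the ATLAS algorithm itself, which adapts the drafter online and verifies the whole run in one batched forward pass:

```python
def target_model(ctx):   # toy stand-in for the "big" model: count upward mod 10
    return (ctx[-1] + 1) % 10

def draft_model(ctx):    # toy "small" model: agrees except right after a 5
    return 0 if ctx[-1] == 5 else (ctx[-1] + 1) % 10

def greedy(model, prefix, max_len):
    out = list(prefix)
    while len(out) < max_len:
        out.append(model(out))
    return out

def speculative_decode(target, draft, prefix, n_draft=4, max_len=12):
    out = list(prefix)
    while len(out) < max_len:
        # 1) the cheap draft proposes a run of n_draft tokens
        ctx = list(out)
        proposal = []
        for _ in range(n_draft):
            t = draft(ctx)
            proposal.append(t)
            ctx.append(t)
        # 2) the target checks the run: keep the agreeing prefix and
        #    substitute its own token at the first mismatch
        ctx = list(out)
        for t in proposal:
            if len(ctx) >= max_len:
                break
            correct = target(ctx)
            ctx.append(correct)
            if t != correct:
                break
        out = ctx
    return out[:max_len]

out = speculative_decode(target_model, draft_model, [0])
print(out)
```

Because mismatches are always replaced by the target's own choice, the output is identical to decoding with the target alone; the win is latency, since agreeing draft tokens amortize target passes.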
  • Oct 09
    Air Street's State of AI 2025 Report
    glm-4.6 jamba-1.5 rnd1 claude-code reflection mastra datacurve spellbook kernel figure softbank abb radicalnumerics zhipu-ai ai21-labs anthropic humanoid-robots mixture-of-experts diffusion-models open-weight-models reinforcement-learning benchmarking small-language-models plugin-systems developer-tools agent-stacks adcock_brett achowdhery clementdelangue
    Reflection raised $2B to build frontier open-weight models with a focus on safety and evaluation, led by a team with backgrounds from AlphaGo, PaLM, and Gemini. Figure launched its next-gen humanoid robot, Figure 03, emphasizing non-teleoperated capabilities for home and large-scale use. Radical Numerics released RND1, a 30B-parameter sparse MoE diffusion language model with open weights and code to advance diffusion LM research. Zhipu posted strong results with GLM-4.6 on the Design Arena benchmark, while AI21 Labs' Jamba Reasoning 3B leads tiny reasoning models. Anthropic introduced a plugin system for Claude Code to enhance developer tools and agent stacks. The report also highlights SoftBank's acquisition of ABB's robotics unit for $5.4B and the growing ecosystem around open frontier modeling and small-model reasoning.
  • Oct 08
    not much happened today
    7m-tiny-recursive-model jamba-reasoning-3b qwen3-omni qwen-image-edit-2509 colbert-nano agentflow samsung lecuun ai21-labs alibaba coreweave weights-biases openpipe stanford recursive-reasoning density-estimation multimodality long-context retrieval serverless-reinforcement-learning agentic-systems model-efficiency reinforcement-learning transformers rasbt jm_alexia jiqizhixin randall_balestr corbtt shawnup _akhaliq
    Samsung's 7M Tiny Recursive Model (TRM) achieves superior reasoning on ARC-AGI and Sudoku with fewer layers and MLP replacing self-attention. LeCun's team introduces JEPA-SCORE, enabling density estimation from encoders without retraining. AI21 Labs releases Jamba Reasoning 3B, a fast hybrid SSM-Transformer model supporting up to 64K context tokens. Alibaba's Qwen3 Omni/Omni Realtime offers a unified audio-video-text model with extensive language and speech support, outperforming Gemini 2.0 Flash on BigBench Audio. Alibaba also debuts Qwen Image Edit 2509, a top open-weight multi-image editing model. ColBERT Nano models demonstrate effective retrieval at micro-scale parameter sizes. In reinforcement learning, CoreWeave, Weights & Biases, and OpenPipe launch serverless RL infrastructure reducing costs and speeding training. Stanford's AgentFlow presents an in-the-flow RL system with a 7B backbone outperforming larger models on agentic tasks. This update highlights advances in recursive reasoning, density estimation, multimodal architectures, long-context modeling, retrieval, and serverless reinforcement learning.
  • Oct 07
    Gemini 2.5 Computer Use preview beats Sonnet 4.5 and OAI CUA
    gemini-2.5 gpt-5-pro glm-4.6 codex google-deepmind openai microsoft anthropic zhipu-ai llamaindex mongodb agent-frameworks program-synthesis security multi-agent-systems computer-use-models open-source moe developer-tools workflow-automation api vision reasoning swyx demishassabis philschmid assaf_elovic hwchase17 jerryjliu0 skirano fabianstelzer blackhc andrewyng
    Google DeepMind released a new Gemini 2.5 Computer Use model for browser and Android UI control, evaluated by Browserbase. OpenAI showcased GPT-5 Pro, new developer tools including Codex with Slack integration, and agent-building SDKs at Dev Day. Google DeepMind's CodeMender automates security patching for large codebases. Microsoft introduced an open-source Agent Framework for multi-agent enterprise systems. AI community discussions highlight agent orchestration, program synthesis, and UI control advancements. GLM-4.6 update from Zhipu features a large Mixture-of-Experts model with 355B parameters.
  • Oct 06
    OpenAI Dev Day: Apps SDK, AgentKit, Codex GA, GPT‑5 Pro and Sora 2 APIs
    gpt-5-pro gpt-realtime-mini-2025-10-06 gpt-audio-mini-2025-10-06 gpt-image-1-mini sora-2 sora-2-pro openai canva figma zillow coursera api model-release fine-tuning agentic-ai code-generation model-deployment pricing prompt-optimization software-development multimodality sama edwinarbus gdb dbreunig stevenheidel
    OpenAI showcased major product launches at their DevDay including the Apps SDK, AgentKit, and Codex now generally available with SDK and enterprise features. They introduced new models such as gpt-5-pro, gpt-realtime-mini-2025-10-06, gpt-audio-mini-2025-10-06, gpt-image-1-mini, and sora-2 with a pro variant. The Apps SDK enables embedding interactive apps inside ChatGPT with partners like Canva, Figma, Zillow, and Coursera. AgentKit offers a full stack for building and deploying production agents with tools like ChatKit and Guardrails. Codex supports speech and controller-driven coding, credited with high internal shipping velocity. Pricing for GPT-5 Pro was revealed at $15 input and $120 output per million tokens. "OpenAI turned ChatGPT into an application platform" and "AgentKit built a working agent in under 8 minutes" were highlights.
  • Oct 03
    not much happened today
    claude-3-sonnet claude-3-opus gpt-5-codex grok-4-fast qwen-3-next gemini-2.5-pro sora-2-pro ray-3 kling-2.5 veo-3 modernvbert anthropic x-ai google google-labs openai arena epoch-ai mit luma akhaliq coding-agents cybersecurity api model-taxonomy model-ranking video-generation benchmarking multi-modal-generation retrieval image-text-retrieval finbarrtimbers gauravisnotme justinlin610 billpeeb apples_jimmy akhaliq
    Anthropic announces a new CTO. Frontier coding agents see updates with Claude Sonnet 4.5 showing strong cybersecurity and polished UX but trailing GPT-5 Codex in coding capability. xAI Grok Code Fast claims higher edit success at lower cost. Google's Jules coding agent launches a programmable API with CI/CD integration. Qwen clarifies its model taxonomy and API tiers. Vision/LM Arena rankings show a tight competition among Claude Sonnet 4.5, Claude Opus 4.1, Gemini 2.5 Pro, and OpenAI's latest models. In video generation, Sora 2 Pro leads App Store rankings with rapid iteration and a new creator ecosystem; early tests show it answers GPQA-style questions at 55% accuracy versus GPT-5's 72%. Video Arena adds new models like Luma's Ray 3 and Kling 2.5 for benchmarking. Multi-modal video+audio generation model Ovi (Veo-3-like) is released. Retrieval models include ModernVBERT from MIT with efficient image-text retrieval capabilities. "Claude Sonnet 4.5 is basically the same as Opus 4.1 for coding" and "Jules is a programmable team member" highlight key insights.
  • Oct 02
    not much happened today
    kling-2.5-turbo sora-2 gemini-2.5-flash granite-4.0 qwen-3 qwen-image-2509 qwen3-vl-235b openai google ibm alibaba kling_ai synthesia ollama huggingface arena artificialanalysis tinker scaling01 video-generation instruction-following physics-simulation image-generation model-architecture mixture-of-experts context-windows token-efficiency fine-tuning lora cpu-training model-benchmarking api workflow-automation artificialanlys kling_ai altryne teortaxestex fofrai tim_dettmers sundarpichai officiallogank andrew_n_carr googleaidevs clementdelangue wzhao_nlp alibaba_qwen scaling01 ollama
    Kling 2.5 Turbo leads in text-to-video and image-to-video generation with competitive pricing. OpenAI Sora 2 shows strong instruction-following but has physics inconsistencies. Google Gemini 2.5 Flash "Nano Banana" image generation is now generally available with multi-image blending and flexible aspect ratios. IBM Granite 4.0 introduces a hybrid Mamba/Transformer architecture with large context windows and strong token efficiency, outperforming some peers on the Intelligence Index. Qwen models receive updates including fine-tuning API support and improved vision capabilities. Tinker offers a flexible fine-tuning API supporting LoRA sharing and CPU-only training loops. The ecosystem also sees updates like Synthesia 3.0 adding video agents.
  • Oct 01
    Thinking Machines' Tinker: LoRA based LLM fine-tuning API
    qwen-235b-a22b sora-2 thinking-machines openai fine-tuning lora model-training api model-optimization distributed-training post-training-methods research-productivity video-generation content-moderation engagement-patterns karpathy lilianweng sama
    Thinking Machines, which recently raised $2 billion before shipping anything, has launched its first product, Tinker: a managed-service API for fine-tuning large and mixture-of-experts models like Qwen-235B-A22B using LoRA for cost-efficient training. The Tinker API offers low-level primitives for post-training methods and is supported by an open-source Tinker Cookbook library. Influential AI figures like Andrej Karpathy and Lilian Weng praised its design for reducing complexity and boosting research productivity. Meanwhile, OpenAI launched Sora 2, a video+audio model integrated into their consumer social app, sparking viral engagement and concerns over misuse and content moderation. Sam Altman emphasized the product's dual focus on delight and revenue alongside AGI research.
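LoRA, the method Tinker builds on, freezes the base weight W and trains only a low-rank update BA, so trainable parameter count scales with the rank r rather than the full matrix. A minimal numpy sketch of the forward pass — dimensions are illustrative, and Tinker's actual API is not shown:

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_out, r = 64, 64, 4             # r is the LoRA rank, r << d

W = rng.normal(size=(d_out, d_in))     # frozen base weight (never updated)
A = rng.normal(size=(r, d_in)) * 0.01  # trainable down-projection
B = np.zeros((d_out, r))               # trainable up-projection, zero-initialized

def lora_forward(x, scale=1.0):
    """y = W x + scale * B (A x): only A and B receive gradient updates."""
    return W @ x + scale * (B @ (A @ x))

x = rng.normal(size=d_in)
# Zero-initializing B makes the adapter a no-op before any training step.
full_params = d_out * d_in
lora_params = r * (d_in + d_out)
print(f"trainable: {lora_params} vs full fine-tune: {full_params}")
```

Training only A and B (512 values here versus 4,096 for the full matrix) is what lets a managed service serve many users' fine-tunes cheaply on shared base weights.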
  • Sep 30
    Sora 2: new video+audio model and OpenAI's first Social Network
    sora-2 claude-4.5-sonnet gpt-5-high openai anthropic video-generation character-consistency social-networks agentic-ai token-efficiency benchmarking model-performance context-management coding math sama
    Sora 2 released with improvements on physical world video modeling and a new "character consistency" feature allowing real-world element injection from a single video. The model powers a new Sora social network app with profiles, DMs, and viral videos, emphasizing user control over likeness use. OpenAI employees are actively experimenting with the model. Meanwhile, Anthropic launched Claude 4.5 Sonnet with enhanced intelligence, token efficiency, and agentic tool use, outperforming some competitors and closely tracking GPT-5-high on benchmarks. Ecosystem support includes LangSmith integration and strong coding/math benchmark results.
  • Sep 29
    Anthropic Claude Sonnet 4.5, Claude Code 2.0, new VS Code Extensions
    claude-sonnet-4.5 claude-code-v2 deepseek-v3.2-exp anthropic deepseek openai stripe swe-bench finance law stem code-execution context-editing memory-management api chrome-extension generative-ui sparse-attention long-context cost-efficiency john_schulman mike_krieger
    Anthropic launched a major update with Claude Sonnet 4.5, achieving 77.2% SWE-Bench verified performance and improvements in finance, law, and STEM. They also released Claude Code v2 featuring checkpoints, a refreshed terminal, and a native VS Code extension, plus a new mascot Clawd. The Claude API gained context editing and memory tools, and the Claude Agent SDK was introduced. The Claude.ai apps now support code execution and file creation, with a Chrome extension available for Max users. Additionally, Imagine with Claude offers a generative UI research preview. Reception has been positive from developers and third-party evaluators. Meanwhile, DeepSeek released V3.2-Exp with a new Sparse Attention algorithm, significantly reducing long-context costs and cutting API prices by over 50%, while maintaining quality.
  • Sep 26
    not much happened today
    gemini-robotics-1.5 gemini-live embeddinggemma veo-3 gemini-2.5-flash code-world-model-32b qwen3-coder-30b vllm-v1 mlx-lm flashattention-4 google meta-ai-fair perplexity-ai baseten spatial-reasoning temporal-reasoning agentic-ai code-semantics code-execution-traces coding-infrastructure runtime-optimization batch-inference embedding-latency api model-optimization model-performance osanseviero _anniexie rmstein scaling01 giffmana cline redhat_ai awnihannun charles_irl bernhardsson akshat_b aravsrinivas
    Google released a dense September update including Gemini Robotics 1.5 with enhanced spatial/temporal reasoning, Gemini Live, EmbeddingGemma, and Veo 3 GA powering creative workflows. They also introduced agentic features like restaurant-reservation agents and reduced pricing for Gemini 2.5 Flash. Meta AI unveiled the open-weight Code World Model (CWM) 32B, excelling in code semantics and math benchmarks, with innovations in training code models via execution traces. Local-first coding setups highlight Qwen3-Coder-30B running efficiently on consumer GPUs, paired with tools like Cline and LM Studio. Runtime improvements include vLLM v1 supporting hybrid models and mlx-lm adding batch inference on Apple silicon. In infrastructure, FlashAttention 4 was reverse-engineered revealing a ~20% speedup from architectural optimizations. Perplexity AI advances its independent web index and browsing API with upcoming feed refreshes. Embedding latency improvements were achieved by Superhuman using Baseten.
  • Sep 25
    GDPVal finding: Claude Opus 4.1 within 95% of AGI (human experts in top 44 white collar jobs)
    claude-4.1-opus gpt-5-high gptnext gemini-2.5-flash gemini-2.5-flash-lite deepseek-v3.1-terminus google-chirp-2 qwen-2.5b openai anthropic google nvidia artificial-analysis deepseek benchmarking agentic-ai tool-use long-context speech-to-text model-evaluation reasoning pricing model-performance kevinweil gdb dejavucoder yuchenj_uw lhsummers
    OpenAI's Evals team released GDPval, a comprehensive evaluation benchmark covering 1,320 tasks across 44 predominantly digital occupations, assessing AI models against human experts with 14 years of experience on average. Early results show Claude 4.1 Opus outperforming human experts in most categories and GPT-5 (high) trailing behind, with projections that GPTnext could match human performance by mid-2026. The benchmark is positioned as a key metric for policymakers and labor-impact forecasting. Additionally, Artificial Analysis reported improvements in Gemini 2.5 Flash/Flash-Lite and DeepSeek V3.1 Terminus, alongside new speech-to-text benchmarks (AA-WER) highlighting leaders like Google Chirp 2 and NVIDIA Canary Qwen2.5B. Agentic AI advances include Kimi OK Computer, an OS-like agent with extended tool capabilities and new vendor verification tools.
  • Sep 24
    not much happened today
    qwen3-max qwen3-vl qwen3-coder-plus gpt-5-codex code-world-model-32b claude-sonnet-4 claude-opus-4.1 alibaba openai meta-ai-fair huggingface anthropic microsoft github context-windows code-generation model-releases model-benchmarking api model-optimization multimodality software-engineering model-training huybery akhaliq lmarena_ai gdb ylecun pierceboggan julesagent
    Alibaba unveiled the Qwen3 model family including Qwen3-Max and Qwen3-VL with a native 256K context window expandable to 1M, strong OCR in 32 languages, and rapid release velocity (~3.5 releases/month) backed by a $52B infrastructure roadmap. OpenAI launched GPT-5 Codex, an agent-optimized coding model with up to 400K context and adaptive reasoning priced at $1.25/$10 per million tokens, integrated into Cline and benchmarked in WebDev arenas. Meta AI FAIR released the open-weight Code World Model (CWM) 32B, a dense code generation model with strong benchmark scores (e.g., 65.8% SWE-bench Verified, 96.6% Math-500) and public safety reports. Ecosystem updates include GitHub Copilot's new embedding model for faster code search and Anthropic's Claude Sonnet 4 and Opus 4.1 integration into Microsoft 365 Copilot. The vLLM 0.10.2 update introduces Decode Context Parallel (DCP) for improved system performance.
  • Sep 23
    Alibaba Yunqi: 7 models released in 4 days (Qwen3-Max, Qwen3-Omni, Qwen3-VL) and $52B roadmap
    qwen3-max qwen3-omni qwen3-vl qwen3guard qwen3-livetranslate qwen3-tts-flash qwen-image-edit qwen3coder qwen alibaba alicloud tool-use large-model-coding reasoning multimodality model-release model-updates industry-application scaling fine-tuning reinforcement-learning junyang_lin eddie_wu alibaba_wan
    Alibaba's Tongyi Qianwen (Qwen) team launched major updates including the 1T-parameter Qwen3-Max, Qwen3-Omni, and Qwen3-VL models, alongside specialized versions like Qwen3Guard, Qwen3-LiveTranslate, Qwen3-TTS-Flash, Qwen-Image-Edit, and Qwen3Coder. At the AliCloud Yunqi (Apsara) conference, CEO Eddie Wu outlined a $52B roadmap emphasizing two AI development stages: "intelligence emergence," focusing on learning from humans and reasoning, and "autonomous action," highlighting AI's tool use and real-world task execution. The updates showcase advances in tool use, large-model coding capabilities, and AI's expanding role across industries such as logistics, manufacturing, biomedicine, and finance. Junyang Lin and the Alibaba Wan team announced the releases. The Qwen project is now seen as a "frontier lab" for AI innovation.
  • Sep 22
    NVIDIA to invest $100B in OpenAI for 10GW of Vera Rubin rollout
    qwen3-omni deepseek-v3.1 nvidia openai oracle intel enfabrica wayne gpu-infrastructure deterministic-inference reinforcement-learning fp8-precision gpu-performance ai-infrastructure strategic-partnerships investment datacenters cuda-graphs pipeline-parallelism data-parallelism artificialanlys gdb
    NVIDIA and OpenAI announced a landmark strategic partnership to deploy at least 10 gigawatts of AI datacenters using NVIDIA's systems, with NVIDIA investing up to $100 billion progressively as each gigawatt is deployed, starting in the second half of 2026 on the Vera Rubin platform. This deal significantly impacts the AI infrastructure funding landscape, potentially supporting OpenAI's $300 billion commitment to Oracle. The announcement caused major stock market reactions, with NVIDIA's market cap surging by $170 billion. Additionally, advancements in deterministic inference for reinforcement learning and FP8 precision gains in GPU performance were highlighted by AI practitioners.
See all issues

Let's Connect

If you want to get in touch with me about something or just to say hi, reach out on social media or send me an email.

  • GitHub /
  • X (@smol_ai) /
  • swyx at smol dot ai
© 2025 • AINews
You can also subscribe by RSS.