All tags
Model: "kimi-k2"
not much happened today
gpt-oss-120b gpt-oss-20b kimi-k2 deepseek-r1 qwen-3-32b openai huggingface microsoft llamaindex ollama baseten fireworksai cerebras groq together anthropic google uk-aisi sliding-window-attention mixture-of-experts rope context-length mxfp4-format synthetic-data reasoning-core-hypothesis red-teaming benchmarking coding-benchmarks model-performance fine-tuning woj_zaremba sama huybery drjimfan jxmnop scaling01 arunv30 kevinweil xikun_zhang_ jerryjliu0 ollama basetenco reach_vb gneubig shxf0072 _lewtun
OpenAI released its first open models since GPT-2, gpt-oss-120b and gpt-oss-20b, which quickly trended on Hugging Face. Microsoft supports these models via Azure AI Foundry and Windows Foundry Local. Key architectural innovations include sliding-window attention, mixture of experts (MoE), a RoPE variant, and a 256k context length. The models use the new MXFP4 format, supported by llama.cpp. Hypotheses suggest gpt-oss was trained on synthetic data to enhance safety and performance, supporting the Reasoning Core Hypothesis. OpenAI announced a $500K red-teaming bounty with partners including Anthropic, Google, and the UK AISI. Performance critiques highlight inconsistent benchmark results, with gpt-oss-120b scoring 41.8% on the Aider Polyglot coding benchmark, trailing competitors like Kimi K2 and DeepSeek R1. Some users note the model excels in math and reasoning but lacks common sense and practical utility.
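For context on the MXFP4 format: it packs each weight as a 4-bit E2M1 value (representable magnitudes 0, 0.5, 1, 1.5, 2, 3, 4, 6) and shares one power-of-two scale per 32-element block. A minimal numpy sketch of dequantization, with illustrative names (this is not llama.cpp's actual API):

```python
import numpy as np

# The 16 E2M1 code points: sign bit, 2 exponent bits, 1 mantissa bit.
FP4_VALUES = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0,
                       -0.0, -0.5, -1.0, -1.5, -2.0, -3.0, -4.0, -6.0])

def dequantize_mxfp4(codes, scale_exponents, block=32):
    """codes: array of 4-bit indices into FP4_VALUES; scale_exponents:
    one E8M0 (power-of-two) exponent per block of 32 elements."""
    vals = FP4_VALUES[codes]
    # Broadcast each block's shared scale over its 32 elements.
    scales = np.repeat(2.0 ** scale_exponents.astype(np.float64), block)
    return vals * scales[: vals.size]
```

The shared per-block scale is what lets a 4-bit element format cover a wide dynamic range at roughly 4.25 bits per weight.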
not much happened today
glm-4.5 glm-4.5-air qwen3-coder qwen3-235b kimi-k2 grok-imagine wan-2.2 smollm3 figure-01 figure-02 vitpose++ chatgpt zhipu-ai alibaba moonshot-ai x-ai figure openai runway mlx ollama deeplearningai model-releases model-performance moe image-generation video-generation pose-estimation robotics training-code-release interactive-learning in-context-learning yuchenj_uw corbtt reach_vb ollama deeplearningai gdb sama c_valenzuelab adcock_brett skalskip92 loubnabenallal1 hojonathanho ostrisai
Chinese AI labs have released powerful open-source models like GLM-4.5 and GLM-4.5-Air from Zhipu AI, Qwen3 Coder and Qwen3-235B from Alibaba, and Kimi K2 from Moonshot AI, highlighting a surge in permissively licensed models. Zhipu AI's GLM-4.5 is a 355B-parameter MoE model competitive with Claude 4 Opus and Gemini 2.5 Pro. Alibaba's Qwen3 Coder shows strong code-generation performance with a low edit failure rate, while Moonshot AI's Kimi K2 is a 1-trillion-parameter MoE model surpassing benchmarks like LiveCodeBench. In video and image generation, xAI launched Grok Imagine, and Wan2.2 impressed with innovative image-to-video generation. Robotics advances include Figure's Figure-01 and Figure-02 humanoid robots and ViTPose++ for pose estimation in basketball analysis. SmolLM3 training and evaluation code was fully released under Apache 2.0. OpenAI introduced Study Mode in ChatGPT to enhance interactive learning, and Runway rolled out Runway Aleph, a new in-context video model for multi-task visual generation. The community sees organizations avoiding these Chinese open-source models as falling behind: "Orgs avoiding these models are at a significant competitive disadvantage," as @corbtt noted.
not much happened today
glm-4.5 glm-4.5-air qwen3-coder qwen3-235b kimi-k2 wan-2.2 grok-imagine smollm3 figure-01 figure-02 vitpose++ zhipu-ai alibaba moonshot-ai x-ai ideogram figure smollm openai model-releases moe model-benchmarking image-generation video-generation pose-estimation robotics training-code-release apache-license yuchenj_uw corbtt cline reach_vb ollama deeplearningai ostrisai hojonathanho adcock_brett skalskip92 loubnabenallal1
Chinese labs have released a wave of powerful, permissively licensed models in July, including Zhipu AI's GLM-4.5 and GLM-4.5-Air, Alibaba's Qwen3 Coder and Qwen3-235B, and Moonshot AI's Kimi K2. These models feature large-scale Mixture-of-Experts architectures with active parameters ranging from 3B to 32B and context windows up to 256K tokens. Zhipu AI's GLM-4.5 competes with Claude 4 Opus and Gemini 2.5 Pro in benchmarks. Moonshot AI's Kimi K2 is a 1-trillion-parameter MoE model surpassing other open-weight models on LiveCodeBench and AceBench. In video and image generation, xAI launched Grok Imagine, and Wan2.2 impressed with its image-to-video approach. Ideogram released a character-consistency model. Robotics advances include Figure's Figure-01 and Figure-02 humanoid robots and ViTPose++ for pose estimation in basketball analysis. The SmolLM3 training and evaluation code was fully released under an Apache 2.0 license. "Orgs avoiding these Chinese open-source models are at a significant competitive disadvantage," as @corbtt noted.
GLM-4.5: Deeper, Headier, & better than Kimi/Qwen/DeepSeek (SOTA China LLM?)
glm-4.5-355b-a32b glm-4.5-air-106b-a12b qwen3-coder claude-4-opus grok-4 o3 gpt-4.1 gpt-5 kimi-k2 claude-sonnet-4 z-ai alibaba huggingface openai reinforcement-learning token-efficiency model-optimization open-source-models agentic-ai coding model-training lupantech teortaxestex mervenoyann _lewtun scaling01 cline
Z.ai (Zhipu AI) released the GLM-4.5-355B-A32B and GLM-4.5-Air-106B-A12B open-weights models, claiming state-of-the-art performance competitive with Claude 4 Opus, Grok 4, and OpenAI's o3. These models emphasize token efficiency and efficient reinforcement-learning training, building on the Muon optimizer. Alibaba Qwen introduced Group Sequence Policy Optimization (GSPO), a new reinforcement-learning algorithm powering the Qwen3 model suite, now integrated into Hugging Face's TRL library. Speculation surrounds the mystery models "summit" and "zenith" as potential GPT-5 variants based on the GPT-4.1 architecture. Qwen3-Coder shows strong coding-benchmark results, rivaling Claude Sonnet 4 and Kimi K2. The rise of powerful Chinese open-source models like GLM-4.5, Wan-2.2, and Qwen3 Coder contrasts with a slowdown from Western labs such as OpenAI.
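GSPO's key idea, per the Qwen report, is moving the PPO-style importance ratio from the token level to a length-normalized sequence level, combined with GRPO-style group-normalized advantages. A minimal numpy sketch under those assumptions (function and argument names are illustrative, not the TRL API):

```python
import numpy as np

def gspo_objective(logp_new, logp_old, rewards, eps=0.2):
    """Sketch of GSPO's clipped objective for one group of G responses.
    logp_new/logp_old: (G, T) per-token log-probs under the current and
    behavior policies (fixed length T here for simplicity; GSPO
    normalizes by each sequence's own length)."""
    length = logp_new.shape[1]
    # Sequence-level importance ratio, length-normalized in log space.
    s = np.exp((logp_new.sum(axis=1) - logp_old.sum(axis=1)) / length)
    # Group-relative advantages, as in GRPO.
    adv = (rewards - rewards.mean()) / (rewards.std() + 1e-8)
    clipped = np.clip(s, 1 - eps, 1 + eps)
    # Clip and take the pessimistic branch, PPO-style.
    return np.mean(np.minimum(s * adv, clipped * adv))
```

Clipping whole sequences rather than individual tokens is what the paper credits for more stable MoE reinforcement learning.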
not much happened today
qwen3-coder-480b-a35b-instruct kimi-k2 alibaba openrouterai togethercompute vllm_project unslothai white-house code-generation benchmarking model-integration context-windows open-source national-security infrastructure ai-policy fchollet clementdelangue scaling01 aravsrinivas rasbt gregkamradt yuchenj_uw
Alibaba announced the release of Qwen3-Coder-480B-A35B-Instruct, an open agentic code model with 480B parameters and 256K context length, praised for rapid development and strong coding performance. Benchmark claims of 41.8% on ARC-AGI-1 faced skepticism from François Chollet and others due to reproducibility issues. The model was quickly integrated into ecosystems like vLLM, Dynamic GGUFs, and OpenRouterAI. The White House unveiled a new AI Action Plan emphasizing Innovation, Infrastructure, and International Diplomacy, linking AI leadership to national security and prioritizing compute access for the Department of Defense. The plan sparked debate over open- vs. closed-source AI, with calls from Clement Delangue to embrace open science to maintain US AI competitiveness.
not much happened today
kimi-k2 qwen3-235b-a22b qwen3-coder-480b-a35b gemini-2.5-flash-lite mistral-7b deepseek-v3 moonshot-ai alibaba google google-deepmind openai hugging-face vllm-project mixture-of-experts agentic-ai model-optimization model-training benchmarking code-generation long-context multimodality math reinforcement-learning model-architecture model-performance open-source alignment demishassabis rasbt alexwei_ yitayml
Moonshot AI released the Kimi K2, a 1-trillion parameter ultra-sparse Mixture-of-Experts (MoE) model with the MuonClip optimizer and a large-scale agentic data pipeline using over 20,000 tools. Shortly after, Alibaba updated its Qwen3 model with the Qwen3-235B-A22B variant, which outperforms Kimi K2 and other top models on benchmarks like GPQA and AIME despite being 4.25x smaller. Alibaba also released Qwen3-Coder-480B-A35B, a MoE model specialized for coding with a 1 million token context window. Google DeepMind launched Gemini 2.5 Flash-Lite, a faster and more cost-efficient model outperforming previous versions in coding, math, and multimodal tasks. The MoE architecture is becoming mainstream, with models like Mistral, DeepSeek, and Kimi K2 leading the trend. In mathematics, an advanced Gemini model achieved a gold medal level score at the International Mathematical Olympiad (IMO), marking a first for AI. An OpenAI researcher noted their IMO model "knew" when it did not have a correct solution, highlighting advances in model reasoning and self-awareness.
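The sparsity these MoE models share comes from routing each token to a small top-k subset of experts, so only a fraction of the total parameters (e.g., 22B of 235B for Qwen3-235B-A22B) is active per token. A toy numpy sketch of top-2 routing, with all names illustrative:

```python
import numpy as np

def moe_forward(x, gate_w, experts, top_k=2):
    """x: input vector; gate_w: (d, E) router weights; experts: list of
    E callables. Only the top_k highest-scoring experts run."""
    logits = x @ gate_w
    topk = np.argsort(logits)[-top_k:]          # indices of selected experts
    # Softmax over only the selected experts' scores.
    weights = np.exp(logits[topk] - logits[topk].max())
    weights /= weights.sum()
    # Unselected experts stay idle: that is the source of the sparsity.
    return sum(w * experts[i](x) for i, w in zip(topk, weights))
```

Production routers add load-balancing losses and capacity limits, omitted here for brevity.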
not much happened today
kimi-k2 gpt-4.1 voxtral goedel-prover-v2 llama-3 mistral-ai moonshot-ai nous-research google-deepmind openai groq anthropic speech-recognition mixture-of-experts benchmarking dataset-release model-architecture theorem-proving reinforcement-learning asymmetry-of-verification inference-speed model-performance cline _jasonwei
Mistral released Voxtral, claimed as the world's best open speech-recognition models, available via API and on Hugging Face. Moonshot AI launched Kimi K2, a trillion-parameter Mixture-of-Experts (MoE) model, outperforming GPT-4.1 on benchmarks with 65.4% on SWE-Bench Verified and achieving 200 tokens/second inference speed on Groq hardware. Nous Research open-sourced the Hermes 3 dataset with 1 million samples, which has powered SOTA fine-tunes of the Llama-3 series. Google DeepMind introduced the Mixture-of-Recursions (MoR) architecture, promising 2x inference speed and a 50% parameter reduction, though it faced skepticism. Goedel-Prover V2 topped the PutnamBench theorem-proving benchmark. The AtCoder World Finals saw a human winner, with OpenAI placing second. Research highlights include Jason Wei's insights on reinforcement learning and the "Verifier's Law," emphasizing the asymmetry of verification in AI training.
Voxtral - Mistral's SOTA ASR model in 3B ("mini") and 24B ("small") sizes beats OpenAI Whisper large-v3
voxtral-3b voxtral-24b kimi-k2 mistral-ai moonshot-ai groq together-ai deepinfra huggingface langchain transcription long-context function-calling multilingual-models mixture-of-experts inference-speed developer-tools model-integration jeremyphoward teortaxestex scaling01 zacharynado jonathanross321 reach_vb philschmid
Mistral surprises with the release of Voxtral, a transcription model outperforming Whisper large-v3, GPT-4o mini Transcribe, and Gemini 2.5 Flash. The Voxtral models (3B and 24B) support a 32k-token context length, handle audio up to 30-40 minutes, offer built-in Q&A and summarization, are multilingual, and enable function-calling from voice commands, powered by the Mistral Small 3.1 language-model backbone. Meanwhile, Moonshot AI's Kimi K2, a non-reasoning Mixture-of-Experts (MoE) model built by a team of around 200 people, gains attention for blazing-fast inference on Groq hardware, broad platform availability including Together AI and DeepInfra, and local execution on an M4 Max 128GB Mac. Developer-tool integrations include LangChain and Hugging Face support, highlighting Kimi K2's strong tool-use capabilities.
not much happened today
kimi-k2 grok-4 gpt-5 gemini-2.5 gemini-embedding cognition windsurf moonshot-ai x-ai openai google stanfordnlp huggingface mixture-of-experts model-training model-performance fine-tuning benchmarking agentic-ai model-bugs embedding-models sama hardmaru jeremyphoward akhaliq teortaxestex yuchenj_uw demishassabis
Cognition is acquiring the remaining assets of Windsurf after a significant weekend deal. Moonshot AI released Kimi K2, an open-source, MIT-licensed agentic model with 1 trillion total / 32B active parameters using a Mixture-of-Experts architecture, trained on 15.5 trillion tokens with the MuonClip optimizer and showing top performance on benchmarks like EQ-Bench and Creative Writing. xAI launched Grok-4, ranking 5th on IQ Bench but with notable quirks, including a bug causing it to respond only with "Heavy" and a high frequency of Elon Musk mentions. Rumors surfaced that OpenAI is delaying an open-source model release, with speculation about @sama's PR strategy and a possible GPT-5 launch in September. The Gemini 2.5 paper was released with 3,295 authors, and Google introduced its Gemini Embedding model, which tops the MTEB leaderboard.
Kimi K2 - SOTA Open MoE proves that Muon can scale to 15T tokens/1T params
kimi-k2 kimi-k2-1t deepseek-v3 grok-4 devstral-2507 gpt-4.1 sonnet-4 moonshot-ai alibaba tencent deepseek x-ai mistral-ai weights-biases hugging-face mixture-of-experts model-training model-optimization optimizer benchmarking long-context model-performance open-weights model-release yuchenj_uw andrew_n_carr scaling01 novita_labs teknium1 aravsrinivas mparakhin simonw
Moonshot AI has released Kimi K2, a 1-trillion-parameter Mixture-of-Experts model trained on 15.5 trillion tokens using the new MuonClip optimizer, achieving state-of-the-art results on benchmarks like SWE-Bench Verified (65.8%) and TAU2 (58.4%). The model is competitive with GPT-4.1 and Sonnet 4 on non-thinking tasks and is available under an MIT license. Meanwhile, xAI announced Grok-4, touted as the "LEAST censored frontier model" with strong long-context performance but criticized for rushed post-training. Mistral AI updated its Devstral 2507 models with improved performance and cost efficiency. The community is excited about the potential of the MuonClip optimizer, which may displace the long-standing AdamW optimizer.
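Muon's core step, in public implementations, replaces the elementwise Adam-style update with an approximate orthogonalization of the momentum matrix via a quintic Newton-Schulz iteration; MuonClip adds qk-clip on top, which is omitted in this minimal sketch (coefficients follow the widely used open-source Muon code):

```python
import numpy as np

def newton_schulz_orthogonalize(G, steps=5):
    """Push the singular values of a (square, for simplicity) momentum
    matrix G toward 1, approximating its nearest orthogonal matrix."""
    a, b, c = 3.4445, -4.7750, 2.0315   # tuned quintic coefficients
    X = G / (np.linalg.norm(G) + 1e-7)  # Frobenius norm => all sv <= 1
    for _ in range(steps):
        A = X @ X.T
        X = a * X + (b * A + c * (A @ A)) @ X
    return X
```

The appeal is that this needs only matmuls (GPU-friendly, no SVD), which is part of why scaling it to a 1T-parameter, 15.5T-token run is seen as a milestone.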