All tags
not much happened today
gpt-5 gpt-4o grok-4 claude-4-sonnet openai microsoft reasoning latency model-routing benchmarking reinforcement-learning hallucination-control creative-writing priority-processing api-traffic model-deprecation user-experience model-selection voice-mode documentation sama nickaturley elaineyale6 scaling01 mustafasuleyman kevinweil omarsar0 jeremyphoward juberti epochairesearch lechmazur gdb
OpenAI launched GPT-5 with a unified user experience that removes manual model selection, causing initial routing and access issues for Plus users; fixes are underway, including restored model options and increased usage limits. GPT-5 introduces "Priority Processing" for lower latency at higher price tiers, achieving a ~750ms median time-to-first-token in some cases. Microsoft reports full Copilot adoption of GPT-5, and API traffic doubled within 24 hours, peaking at 2 billion tokens per minute. Early benchmarks show GPT-5 leading on reasoning tasks like FrontierMath and LiveBench, with improvements in hallucination control and creative writing, though models such as Grok-4 and Claude-4 Sonnet Thinking outperform it on specific RL-heavy reasoning benchmarks. OpenAI also released extensive migration and feature guides but faced rollout issues, including a broken code sample and a problematic Voice Mode launch. The "unified GPT-5" experience ends the model picker, pushing developers away from manual model selection.
GLM-4.5: Deeper, Headier, & better than Kimi/Qwen/DeepSeek (SOTA China LLM?)
glm-4.5-355b-a32b glm-4.5-air-106b-a12b qwen3-coder claude-4-opus grok-4 o3 gpt-4.1 gpt-5 kimi-k2 claude-sonnet-4 z-ai alibaba huggingface openai reinforcement-learning token-efficiency model-optimization open-source-models agentic-ai coding model-training lupantech teortaxestex mervenoyann _lewtun scaling01 cline
Z.ai (Zhipu AI) released the GLM-4.5-355B-A32B and GLM-4.5-Air-106B-A12B open-weights models, claiming state-of-the-art performance competitive with Claude 4 Opus, Grok 4, and OpenAI's o3. These models emphasize token efficiency and efficient reinforcement learning training, validated with the Muon optimizer. Alibaba Qwen introduced Group Sequence Policy Optimization (GSPO), a new reinforcement learning algorithm powering the Qwen3 model suite, integrated into Hugging Face's TRL library. Speculation surrounds mystery models "summit" and "zenith" as potential GPT-5 variants based on the GPT-4.1 architecture. Qwen3-Coder shows strong coding benchmark results, rivaling Claude Sonnet 4 and Kimi K2. The rise of powerful Chinese open-source models like GLM-4.5, Wan-2.2, and Qwen3-Coder contrasts with a slowdown from Western labs such as OpenAI.
not much happened today
gpt-5 gpt4-0314 qwen3-235b-thinking runway-aleph imagen-4-ultra smollm3 grok-4 openai alibaba runway hugging-face google anthropic pytorch lmarena reinforcement-learning reasoning video-generation image-generation model-optimization open-source model-performance inference-speed integration stability sama clementdelangue xikun_zhang_ teknnium1 chujiezheng
OpenAI has fully rolled out its ChatGPT agent to all Plus, Pro, and Team users and is building hype for the upcoming GPT-5, which reportedly outperforms Grok-4 and can build a cookie clicker game in two minutes. Alibaba's Qwen team released the open-source reasoning model Qwen3-235B-Thinking, achieving an 89% win rate over gpt4-0314 using a new RL algorithm called Group Sequence Policy Optimization (GSPO). Runway introduced Runway Aleph, a state-of-the-art in-context video model for editing and generating video content. Hugging Face highlights the growing momentum of open-source AI, especially from Chinese teams. Other updates include Kling's upgrades for image-to-video generation and Google's Imagen 4 Ultra being recognized as a top text-to-image model. Anthropic integrated Claude with Canva for branded visual designs but faces stability issues. The PyTorch team released optimized checkpoints for SmolLM3 to speed up inference.
not much happened today
kimi-k2 grok-4 gpt-5 gemini-2.5 gemini-embedding cognition windsurf moonshot-ai x-ai openai google stanfordnlp huggingface mixture-of-experts model-training model-performance fine-tuning benchmarking agentic-ai model-bugs embedding-models sama hardmaru jeremyphoward akhaliq teortaxestex yuchenj_uw demishassabis
Cognition is acquiring the remaining assets of Windsurf after a significant weekend deal. Moonshot AI released Kimi K2, an open-source, MIT-licensed agentic model with 1T total / 32B active parameters using a Mixture-of-Experts architecture, trained on 15.5 trillion tokens with the MuonClip optimizer, showing top performance on benchmarks like EQ-Bench and Creative Writing. xAI launched Grok-4, ranking 5th on IQ Bench but with notable quirks, including a bug causing it to respond only with "Heavy" and a high frequency of Elon Musk mentions. Rumors about OpenAI delaying an open-source model release surfaced, with speculation about CEO sama's PR strategy and a possible GPT-5 launch in September. The Gemini 2.5 paper was released with 3,295 authors, and Google introduced its Gemini Embedding model, topping the MTEB leaderboard.
Kimi K2 - SOTA Open MoE proves that Muon can scale to 15T tokens/1T params
kimi-k2 kimi-k2-1t deepseek-v3 grok-4 devstral-2507 gpt-4.1 sonnet-4 moonshot-ai alibaba tencent deepseek x-ai mistral-ai weights-biases hugging-face mixture-of-experts model-training model-optimization optimizer benchmarking long-context model-performance open-weights model-release yuchenj_uw andrew_n_carr scaling01 novita_labs teknium1 aravsrinivas mparakhin simonw
Moonshot AI has released Kimi K2, a 1 trillion parameter Mixture-of-Experts model trained on 15.5 trillion tokens using the new MuonClip optimizer, achieving state-of-the-art results on benchmarks like SWE-Bench Verified (65.8%) and TAU2 (58.4%). The model is competitive with GPT-4.1 and Sonnet 4 on non-thinking tasks and is available under an MIT license. Meanwhile, xAI announced Grok-4, noted for its "LEAST censored frontier model" status and strong long-context performance but criticized for rushed post-training. Mistral AI updated its Devstral 2507 models with improved performance and cost efficiency. The community is excited about the potential of the MuonClip optimizer, which may eventually displace the long-standing AdamW optimizer.
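A quick back-of-envelope illustration of why the "1T total / 32B active" Mixture-of-Experts split matters: only a small fraction of the weights participate in each forward pass. The per-token FLOP estimate below uses the common dense-equivalent rule of thumb (~2 FLOPs per active parameter per token); it is an assumption for illustration, not a published Moonshot figure.

```python
# Back-of-envelope sparsity math for a 1T-total / 32B-active MoE like Kimi K2.
total_params = 1_000_000_000_000   # 1T total parameters (all experts)
active_params = 32_000_000_000     # 32B parameters activated per token

# Fraction of weights touched on any single token.
active_fraction = active_params / total_params
print(f"active fraction per token: {active_fraction:.1%}")  # 3.2%

# Rough decode-time compute: ~2 FLOPs per active parameter per token.
flops_per_token = 2 * active_params
print(f"approx FLOPs per decoded token: {flops_per_token:.2e}")
```

This is the core economic argument for MoE: inference compute scales with the active parameters, while total capacity scales with the full expert pool.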
Grok 4: xAI succeeds in going from 0 to new SOTA LLM in 2 years
grok-4 grok-4-heavy claude-4-opus xai perplexity-ai langchain cursor cline model-releases benchmarking long-context model-pricing model-integration voice performance scaling gpu-optimization elonmusk aravsrinivas igor_babuschkin yuchenj_uw
xAI launched Grok 4 and Grok 4 Heavy, large language models rumored to have 2.4 trillion parameters and trained with 100x more compute than Grok 2 on 100k H100 GPUs. Grok 4 achieved new state-of-the-art results on benchmarks like ARC-AGI-2 (15.9%), HLE (50.7%), and Vending-Bench, outperforming models such as Claude 4 Opus. The model supports a 256K context window and is priced at $3.00/M input tokens and $15.00/M output tokens. It is integrated into platforms like Cursor, Cline, LangChain, and Perplexity Pro/Max. The launch was accompanied by a controversial voice mode and sparked industry discussion about xAI's rapid development pace, with endorsements from figures like Elon Musk and Arav Srinivas.
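At the quoted rates ($3.00 per million input tokens, $15.00 per million output tokens), per-request cost is easy to estimate. The token counts in the example call below are made-up illustrative values, not measurements:

```python
# Estimate per-request Grok 4 API cost from the published per-million-token rates.
INPUT_PRICE_PER_M = 3.00    # $ per 1M input tokens
OUTPUT_PRICE_PER_M = 15.00  # $ per 1M output tokens

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Dollar cost of one request at the quoted rates."""
    return (input_tokens / 1_000_000) * INPUT_PRICE_PER_M \
         + (output_tokens / 1_000_000) * OUTPUT_PRICE_PER_M

# Hypothetical request: 10k tokens of context in, 2k tokens generated.
print(f"${request_cost(10_000, 2_000):.4f}")  # $0.0600
```

Note the 5x input/output price asymmetry: a request that fills most of the 256K context window is dominated by input cost unless the completion is very long.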
not much happened today
grok-4 smollm3 t5gemma claude-3.7-sonnet deepseek-r1 langchain openai google-deepmind perplexity xai microsoft huggingface anthropic agentic-ai model-controversy open-source model-release alignment fine-tuning long-context multimodality model-research aravsrinivas clementdelangue _akhaliq
LangChain is nearing unicorn status, while OpenAI and Google DeepMind are both reportedly preparing major model launches, including Gemini 3 Pro. Perplexity rolls out its agentic browser Comet to waitlists, offering multitasking and voice command features. xAI's Grok-4 update sparked controversy due to offensive outputs, drawing comparisons to Microsoft's Tay bot and resulting in regional blocks. Hugging Face released SmolLM3, a 3B parameter open-source model with state-of-the-art reasoning and long-context capabilities. Google introduced T5Gemma encoder-decoder models, a significant update in this model category. Anthropic investigates "alignment faking" in language models, focusing on safety concerns with models like Claude 3.7 Sonnet and DeepSeek-R1. "Grok 3 had high reasoning, Grok 4 has heil reasoning" was a notable user comment on the controversy.
SmolLM3: the SOTA 3B reasoning open source LLM
smollm3-3b olmo-3 grok-4 claude-4 claude-4.1 gemini-nano hunyuan-a13b gemini-2.5 gemma-3n qwen2.5-vl-3b huggingface allenai openai anthropic google-deepmind mistral-ai tencent gemini alibaba open-source small-language-models model-releases model-performance benchmarking multimodality context-windows precision-fp8 api batch-processing model-scaling model-architecture licensing ocr elonmusk mervenoyann skirano amandaaskell clementdelangue loubnabenallal1 awnihannun swyx artificialanlys officiallogank osanseviero cognitivecompai aravsrinivas
HuggingFace released SmolLM3-3B, a fully open-source small reasoning model with open pretraining code and data, marking a high-water mark for open-source models until Olmo 3 arrives. Grok 4 launched to mixed reactions, while concerns about Claude 4 nerfs and an imminent Claude 4.1 surfaced. Gemini Nano is now shipping in Chrome 137+, enabling local LLM access for 3.7 billion users. Tencent introduced Hunyuan-A13B, an 80B parameter model with a 256K context window that runs on a single H200 GPU. The Gemini API added a batch mode with 50% discounts on 2.5 models. MatFormer Lab launched tools for custom-sized Gemma 3n models. Open-source OCR models like Nanonets-OCR-s and ChatDOC/OCRFlux-3B, derived from Qwen2.5-VL-3B, were highlighted, with licensing discussions involving Alibaba.
not much happened today
grok-4 jamba ernie-4.5 claude-4-sonnet claude-4 kontext-dev ai21-labs hugging-face baidu perplexity-ai deepmind anthropic reinforcement-learning fine-tuning energy-based-transformers ssm-transformer context-windows length-generalization recurrent-neural-networks attention-mechanisms 2-simplicial-attention biomedical-ai instruction-following open-weight-models python-package-management _philschmid corbtt jxmnop sedielem _akhaliq slashml alexiglad clementdelangue _albertgu tri_dao theaitimeline deep-learning-ai
Over the holiday weekend, key AI developments include the upcoming release of Grok 4, Perplexity teasing new projects, and community reactions to Cursor and Dia. Research highlights feature a paper on Reinforcement Learning (RL) improving generalization and reasoning across domains, contrasting with Supervised Fine-Tuning's forgetting issues. Energy-Based Transformers (EBTs) are proposed as a promising alternative to traditional transformers. AI21 Labs updated its Jamba model family with enhanced grounding and instruction following, maintaining a 256K context window. Baidu open-sourced its massive 424 billion parameter Ernie 4.5 model, while Kontext-dev became the top trending model on Hugging Face. Advances in length generalization for recurrent models and the introduction of 2-simplicial attention were noted. In biomedical AI, Biomni, powered by Claude 4 Sonnet, demonstrated superior accuracy and rare disease diagnosis capabilities. Additionally, the Python package manager uv received praise for improving Python installation workflows.