All tags
Model: "claude-sonnet-4"
not much happened today
gpt-5 gpt-5-mini gpt-5-nano claude-sonnet-4 glm-4.5v genie-3 gemini-app qwen-image-distilled matrix-game-2.0 jan-v1 qwen3-4b-thinking openai anthropic zhipu-ai google-deepmind alibaba skywork jan-ai context-window multimodality reinforcement-learning agentic-tasks video-generation image-generation real-time-systems web-search model-accuracy developer-tools open-source-models long-context model-scaling
OpenAI released the GPT-5 series, including GPT-5-mini and GPT-5-nano, to mixed user feedback on performance and API behavior. Anthropic extended Claude Sonnet 4's context window to 1 million tokens, a 5x increase that improves large-document processing. Zhipu AI launched GLM-4.5V, an open-source multimodal model with improvements in RL scaling and agentic tasks. Google DeepMind showcased the video generation model Genie 3 and updated the Gemini App with new features such as Deep Think and Gemini Live. Alibaba Qwen released a distilled version of its Qwen-Image model and enhanced its Deep Research capabilities. New open-source models were also introduced: Skywork's Matrix-Game 2.0, focused on real-time world modeling, and Jan.ai's Jan-v1 (built on Qwen3-4B-Thinking), focused on web search. Developer tools such as Claude Code and Cursor were also highlighted.
GLM-4.5: Deeper, Headier, & better than Kimi/Qwen/DeepSeek (SOTA China LLM?)
glm-4.5-355b-a32b glm-4.5-air-106b-a12b qwen3-coder claude-4-opus grok-4 o3 gpt-4.1 gpt-5 kimi-k2 claude-sonnet-4 z-ai alibaba huggingface openai reinforcement-learning token-efficiency model-optimization open-source-models agentic-ai coding model-training lupantech teortaxestex mervenoyann _lewtun scaling01 cline
Z.ai (Zhipu AI) released GLM-4.5-355B-A32B and GLM-4.5-Air-106B-A12B as open-weights models, claiming state-of-the-art performance competitive with Claude 4 Opus, Grok 4, and OpenAI's o3. The models emphasize token efficiency and efficient reinforcement-learning training using the Muon optimizer. Alibaba Qwen introduced Group Sequence Policy Optimization (GSPO), a new reinforcement-learning algorithm powering the Qwen3 model suite, now integrated into Hugging Face's TRL library. Speculation surrounds the mystery models "summit" and "zenith" as potential GPT-5 variants based on the GPT-4.1 architecture. Qwen3-Coder posts strong coding-benchmark results, rivaling Claude Sonnet 4 and Kimi K2. The rise of powerful Chinese open-source models such as GLM-4.5, Wan-2.2, and Qwen3-Coder contrasts with a slowdown in releases from Western labs such as OpenAI.
not much happened today
deepseek-r1-0528 pali-gemma-2 gemma-3 shieldgemma-2 txgemma gemma-3-qat gemma-3n-preview medgemma dolphingemma signgemma claude-4 opus-4 claude-sonnet-4 codestral-embed bagel qwen nemotron-cortexa gemini-2.5-pro deepseek-ai huggingface gemma claude bytedance qwen nemotron sakana-ai-labs benchmarking model-releases multimodality code-generation model-performance long-context reinforcement-learning model-optimization open-source yuchenj_uw _akhaliq clementdelangue osanseviero alexalbert__ guillaumelample theturingpost lmarena_ai epochairesearch scaling01 nrehiew_ ctnzr
The updated DeepSeek R1 model (R1-0528) was released, with availability on Hugging Face and inference partners. The Gemma model family continues its prolific development, including PaliGemma 2, Gemma 3, and others. Claude 4 and its variants, Opus 4 and Claude Sonnet 4, show top benchmark performance, including a new SOTA on ARC-AGI-2 and WebDev Arena. Codestral Embed introduces a 3072-dimensional code embedder. BAGEL, an open-source multimodal model from ByteDance, supports reading, reasoning, drawing, and editing over long mixed contexts. Benchmarking highlights include Nemotron-CORTEXA topping SWE-bench and Gemini 2.5 Pro's performance on VideoGameBench. Discussions about the effectiveness of random rewards in RL training center on Qwen models. Quotes like "Opus 4 NEW SOTA ON ARC-AGI-2. It's happening - I was right" and "Claude 4 launch has dev moving at a different pace" reflect community excitement.