Model: "gpt-oss-20b"
DeepSeek V3.1: 840B token continued pretrain, beating Claude 4 Sonnet at 11% of its cost
deepseek-v3.1 seed-oss-36b computerrl gemini-2.5-pro gpt-5 claude-code gpt-oss-120b gpt-oss-20b deepseek bytedance zhipu-ai github microsoft anthropic together-ai baseten huggingface token-efficiency coding agentic-benchmarks long-context reinforcement-learning developer-tools fine-tuning multinode-training model-release teortaxestex rasbt lukehoban burkeholland _catwu cline winglian
DeepSeek released DeepSeek V3.1, a quietly rolled-out open model with a 128K context window and improvements in token efficiency, coding, and agentic benchmarks. ByteDance launched the permissively licensed Seed-OSS 36B model on Hugging Face, noted for long-context and reasoning capabilities. Zhipu AI introduced ComputerRL, a reinforcement learning framework for computer-use agents, achieving strong benchmark results. In developer tooling, GitHub Copilot expanded globally, Microsoft VS Code integrated Gemini 2.5 Pro and updated its GPT-5 agent prompts, and Anthropic launched Claude Code seats with spend controls. Open-source fine-tuning advances include Together AI adding SFT for gpt-oss-120b/20b and Baseten enabling multinode training of the 120B model with its Truss CLI. The community noted mixed performance and ongoing post-training adjustments for DeepSeek V3.1.
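The SFT support mentioned above amounts to standard supervised next-token training on curated prompt/response pairs. A minimal sketch of that idea in plain PyTorch with Hugging Face transformers follows; the model identifier, data, and hyperparameters are placeholders, and this is not Together AI's or Baseten's actual pipeline.

```python
# Minimal SFT sketch: next-token cross-entropy on a prompt/response pair.
# Assumes the Hugging Face id "openai/gpt-oss-20b"; data and hyperparameters are placeholders.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "openai/gpt-oss-20b"
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)

# One illustrative training step; real SFT usually masks prompt tokens out of the loss.
batch = tok(["### Prompt: ...\n### Response: ..."], return_tensors="pt")
out = model(**batch, labels=batch["input_ids"])  # causal-LM loss over the whole sequence
out.loss.backward()
optimizer.step()
optimizer.zero_grad()
```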
not much happened today
gpt-oss-120b gpt-oss-20b kimi-k2 deepseek-r1 qwen-3-32b openai huggingface microsoft llamaindex ollama baseten fireworksai cerebras groq together anthropic google uk-aisi sliding-window-attention mixture-of-experts rope context-length mxfp4-format synthetic-data reasoning-core-hypothesis red-teaming benchmarking coding-benchmarks model-performance fine-tuning woj_zaremba sama huybery drjimfan jxmnop scaling01 arunv30 kevinweil xikun_zhang_ jerryjliu0 ollama basetenco reach_vb gneubig shxf0072 _lewtun
OpenAI released its first open models since GPT-2, gpt-oss-120b and gpt-oss-20b, which quickly trended on Hugging Face. Microsoft supports these models via Azure AI Foundry and Foundry Local on Windows. Key architectural innovations include sliding window attention, mixture-of-experts (MoE), a RoPE variant, and a 256k context length. The models use a new MXFP4 format supported by llama.cpp. Hypotheses suggest gpt-oss was trained on synthetic data to enhance safety and performance, supporting the Reasoning Core Hypothesis. OpenAI announced a $500K bounty for red teaming with partners including Anthropic, Google, and the UK AISI. Performance critiques highlight inconsistent benchmarking results, with gpt-oss-120b scoring 41.8% on the Aider Polyglot coding benchmark, trailing competitors like Kimi-K2 and DeepSeek-R1. Some users note the model excels in math and reasoning but lacks common sense and practical utility.
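For orientation, the sliding window attention mentioned above restricts each token to a fixed-size band of recent positions rather than the full causal prefix. Below is a minimal mask-construction sketch in PyTorch; the window size and shapes are illustrative assumptions, not OpenAI's configuration.

```python
# Banded (sliding-window) causal mask: token i attends only to the last `window` positions.
import torch

def sliding_window_causal_mask(seq_len: int, window: int) -> torch.Tensor:
    i = torch.arange(seq_len).unsqueeze(1)  # query positions
    j = torch.arange(seq_len).unsqueeze(0)  # key positions
    return (j <= i) & (j > i - window)      # causal AND within the window

# Example: 8 tokens, window of 4 -- each token sees itself plus the 3 before it.
mask = sliding_window_causal_mask(8, 4)
scores = torch.randn(8, 8)                  # hypothetical attention logits
weights = torch.softmax(scores.masked_fill(~mask, float("-inf")), dim=-1)
```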
OpenAI's gpt-oss 20B and 120B, Claude Opus 4.1, DeepMind Genie 3
gpt-oss-120b gpt-oss-20b gpt-oss claude-4.1-opus claude-4.1 genie-3 openai anthropic google-deepmind mixture-of-experts model-architecture agentic-ai model-training model-performance reasoning hallucination-detection gpu-optimization open-weight-models realtime-simulation sama rasbt sebastienbubeck polynoamial kaicathyc finbarrtimbers vikhyatk scaling01 teortaxestex
OpenAI released the gpt-oss family, including gpt-oss-120b and gpt-oss-20b, their first open-weight models since GPT-2, designed for agentic tasks and licensed under Apache 2.0. These models use a Mixture-of-Experts (MoE) architecture that favors a wide rather than deep design, with features such as bias units in attention and a distinctive SwiGLU variant. The 120B model was trained with about 2.1 million H100 GPU hours. Meanwhile, Anthropic launched claude-4.1-opus, touted as the best coding model currently available. DeepMind showcased genie-3, a real-time world-simulation model with minute-long consistency. The releases highlight advances in open-weight models, reasoning capabilities, and world simulation. Key figures like @sama, @rasbt, and @SebastienBubeck provided technical insights and performance evaluations, noting strengths and hallucination risks.
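The MoE design referenced here boils down to routing each token to a small subset of expert feed-forward networks. A minimal top-k router sketch in PyTorch is shown below; the expert count, k, and dimensions are invented for illustration and do not reflect the gpt-oss configuration.

```python
# Top-k expert routing: pick k experts per token and renormalize their gate weights.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TopKRouter(nn.Module):
    def __init__(self, d_model: int, n_experts: int, k: int):
        super().__init__()
        self.gate = nn.Linear(d_model, n_experts, bias=False)
        self.k = k

    def forward(self, x: torch.Tensor):
        logits = self.gate(x)                               # (tokens, n_experts)
        topk_logits, topk_idx = logits.topk(self.k, dim=-1)
        weights = F.softmax(topk_logits, dim=-1)            # weights over the chosen experts
        return topk_idx, weights

router = TopKRouter(d_model=512, n_experts=32, k=4)         # hypothetical sizes
idx, w = router(torch.randn(10, 512))                       # per-token expert choices and weights
```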