Person: "ericmitchellai"
not much happened today
Tags: grok-2, grok-2.5, vibevoice-1.5b, motif-2.6b, gpt-5, qwen-code, xai-org, microsoft, motif-technology, alibaba, huggingface, langchain-ai, mixture-of-experts, model-scaling, model-architecture, text-to-speech, fine-tuning, training-data, optimization, reinforcement-learning, agentic-ai, tool-use, model-training, model-release, api, software-development, model-quantization, elonmusk, clementdelangue, rasbt, quanquangu, akhaliq, eliebakouch, gdb, ericmitchellai, ivanfioravanti, deanwball, giffmana, omarsar0, corbtt
xAI released open weights for Grok-2 and Grok-2.5, featuring a novel MoE residual architecture and μP scaling, sparking community excitement along with licensing concerns. Microsoft open-sourced VibeVoice-1.5B, a multi-speaker long-form TTS model with streaming support, with a 7B variant forthcoming. Motif Technology published a detailed report on Motif-2.6B, highlighting Differential Attention, PolyNorm, and extensive finetuning, with training done on AMD MI250 GPUs. In coding tools, momentum is building around GPT-5-backed workflows, with developers favoring them over Claude Code. Alibaba released Qwen-Code v0.0.8 with deep VS Code integration and MCP CLI enhancements. The MCP ecosystem continues to advance with the LiveMCP-101 stress tests, the universal MCP server "Rube," and LangGraph Platform's rollout of revision queueing and ART integration for RL training of agents.
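The Differential Attention highlighted in the Motif-2.6B report computes two softmax attention maps and subtracts one from the other, scaled by a factor λ, to cancel common-mode attention noise. A minimal single-head NumPy sketch of that general idea follows; this is an illustration, not Motif's actual implementation, and the normally learnable λ is fixed here for simplicity:

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def differential_attention(q1, k1, q2, k2, v, lam=0.5):
    # Two independent attention maps from two sets of query/key projections;
    # the second map, scaled by lam, is subtracted to suppress shared noise.
    d = q1.shape[-1]
    a1 = softmax(q1 @ k1.T / np.sqrt(d))
    a2 = softmax(q2 @ k2.T / np.sqrt(d))
    return (a1 - lam * a2) @ v

rng = np.random.default_rng(0)
q1, k1, q2, k2, v = (rng.standard_normal((4, 8)) for _ in range(5))
out = differential_attention(q1, k1, q2, k2, v)  # shape (4, 8)
```

With `lam=0` this reduces to standard scaled dot-product attention over the first query/key pair.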
OpenAI's IMO Gold model also wins IOI Gold
Tags: gpt-5, gpt-5-thinking, gpt-5-mini, gemini-2.5-pro, claude, opus-4.1, openai, google-deepmind, anthropic, reinforcement-learning, benchmarking, model-performance, prompt-engineering, model-behavior, competitive-programming, user-experience, model-naming, model-selection, hallucination-detection, sama, scaling01, yanndubs, sherylhsu, ahmed_el-kishky, jerry_tworek, noam_brown, alex_wei, amandaaskell, ericmitchellai, jon_durbin, gdb, jerryjliu0
OpenAI announced that its model placed #6 among human coders at the IOI, reflecting rapid progress in competitive-programming AI over the past two years. The GPT-5 launch faced significant user backlash over restrictive usage limits and the removal of model-selection control, prompting OpenAI to reverse course and raise the limit to 3,000 requests per week for Plus users. Confusion around GPT-5 naming and benchmarking was highlighted, with critiques of methodological issues in comparisons against models like Claude and Gemini. Performance reviews of GPT-5 are mixed: OpenAI staff claim near-zero hallucinations, while users report confidently stated hallucinations and difficulty steering the model. Benchmarks show GPT-5 mini performing well on document understanding, while the full GPT-5 is seen as expensive and middling. On the Chatbot Arena, Gemini 2.5 Pro holds a 67% win rate against GPT-5 Thinking. Prompting and model behavior remain key discussion points.