Model: "opus-4.1"
not much happened today
gpt-5 gpt-oss-120b opus-4.1 sonnet-4 openai anthropic minimax context-windows model-routing model-hosting multi-tool-pipelines prompt-caching model-extraction model-pairing cost-efficiency model-optimization sama jeremyphoward jxmnop _catwu
OpenAI continues shipping small updates to GPT-5, introducing "Auto/Fast/Thinking" modes with a 196k-token context, 3,000 messages/week, and dynamic routing to cheaper models for cost efficiency. The MiniMax AI Agent Challenge offers $150,000 in prizes for AI agent development by August 25. The community discusses GPT-OSS-120B base model extraction, hosting, and tooling improvements, including multi-tool pipelines and flex-attention. Anthropic announces model pairing in Claude Code, with Opus 4.1 for planning and Sonnet 4 for execution, alongside an expanded 1M-token context and prompt caching. Key figures include @sama, @jeremyphoward, @jxmnop, and @_catwu.
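In practice, the pairing-plus-caching pattern looks roughly like the sketch below, using the anthropic Python SDK: a long shared system prompt is marked cacheable, Opus 4.1 drafts a plan, and Sonnet 4 executes it. The model aliases, system prompt, and plan/execute split are illustrative assumptions, not Claude Code's actual internals.

```python
# Sketch: Opus plans, Sonnet executes, with prompt caching on the shared
# system prompt. Model aliases and the planning/execution split are
# illustrative assumptions for this example.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

SYSTEM = [{
    "type": "text",
    "text": "You are a coding assistant working in a large repository...",
    # Mark the long, stable prefix as cacheable so repeated calls can
    # reuse it instead of re-processing it each time.
    "cache_control": {"type": "ephemeral"},
}]

def ask(model: str, prompt: str) -> str:
    resp = client.messages.create(
        model=model,
        max_tokens=1024,
        system=SYSTEM,
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.content[0].text

# Pair the models: the stronger model drafts a plan, the cheaper one executes.
plan = ask("claude-opus-4-1", "Plan the refactor of the auth module into steps.")
patch = ask("claude-sonnet-4-0", f"Carry out step 1 of this plan:\n{plan}")
print(patch)
```

Because both calls share the cached system prefix, the cost of the cheaper execution calls drops further on repeated use, which is the point of pairing an expensive planner with an inexpensive executor.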
OpenAI's IMO Gold model also wins IOI Gold
gpt-5 gpt-5-thinking gpt-5-mini gemini-2.5-pro claude opus-4.1 openai google-deepmind anthropic reinforcement-learning benchmarking model-performance prompt-engineering model-behavior competitive-programming user-experience model-naming model-selection hallucination-detection sama scaling01 yanndubs sherylhsu ahmed_el-kishky jerry_tworek noam_brown alex_wei amandaaskell ericmitchellai jon_durbin gdb jerryjliu0
OpenAI announced that its model placed #6 among human competitors at the IOI, reflecting rapid progress in competitive-coding AI over the past two years. The GPT-5 launch faced significant user backlash over restrictive usage limits and the removal of model-selection control, prompting a reversal that raised limits to 3,000 requests per week for Plus users. Confusion around GPT-5 naming and benchmarking was highlighted, with critiques of methodological issues in comparisons against models like Claude and Gemini. Performance reviews of GPT-5 are mixed: OpenAI staff claim near-zero hallucinations, while users report confidently stated hallucinations and difficulty steering the model. Benchmarks show GPT-5 mini performing well on document understanding, while the full GPT-5 is seen as expensive and middling. On the Chatbot Arena, Gemini 2.5 Pro holds a 67% winrate against GPT-5 Thinking. Prompting and model behavior remain key discussion points.
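For reference, an Arena-style head-to-head winrate is just the fraction of pairwise battles a model wins, with ties typically counted as half a win. A minimal sketch follows; the battle-record format is assumed for illustration, not Arena's actual schema.

```python
# Sketch: pairwise winrate from battle records, ties counted as half a win.
# The (model_x, model_y, winner) record format is an assumption.
def head_to_head_winrate(battles, a, b):
    """battles: list of (model_x, model_y, winner), winner in {model_x, model_y, 'tie'}."""
    wins = n = 0.0
    for x, y, winner in battles:
        if {x, y} != {a, b}:  # skip battles not between these two models
            continue
        n += 1
        if winner == a:
            wins += 1
        elif winner == "tie":
            wins += 0.5
    return wins / n if n else float("nan")

battles = [
    ("gemini-2.5-pro", "gpt-5-thinking", "gemini-2.5-pro"),
    ("gpt-5-thinking", "gemini-2.5-pro", "tie"),
    ("gemini-2.5-pro", "gpt-5-thinking", "gpt-5-thinking"),
]
print(head_to_head_winrate(battles, "gemini-2.5-pro", "gpt-5-thinking"))  # 0.5
```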