Person: "jerry_tworek"
OpenAI's IMO Gold model also wins IOI Gold
gpt-5 gpt-5-thinking gpt-5-mini gemini-2.5-pro claude opus-4.1 openai google-deepmind anthropic reinforcement-learning benchmarking model-performance prompt-engineering model-behavior competitive-programming user-experience model-naming model-selection hallucination-detection sama scaling01 yanndubs sherylhsu ahmed_el-kishky jerry_tworek noam_brown alex_wei amandaaskell ericmitchellai jon_durbin gdb jerryjliu0
OpenAI announced that its model placed #6 among human coders at the IOI, reflecting rapid progress in competitive coding AI over the past two years. The GPT-5 launch faced significant user backlash over restrictive usage limits and the removal of model selection control, prompting a reversal that raised limits to 3,000 requests per week for Plus users. Confusion around GPT-5 naming and benchmarking was also highlighted, with critiques of methodological issues in comparisons against models like Claude and Gemini. Performance reviews of GPT-5 are mixed: OpenAI staff claim near-zero hallucinations, while users report confidently stated hallucinations and difficulty steering the model. Benchmarks show GPT-5 mini performing well on document understanding, while the full GPT-5 is seen as expensive and middling. On the Chatbot Arena, Gemini 2.5 Pro holds a 67% winrate against GPT-5 Thinking. Prompting and model behavior remain key discussion points.
OAI and GDM announce IMO Gold-level results with natural language reasoning, no specialized training or tools, under human time limits
gemini-1.5-pro o1 openai google-deepmind reinforcement-learning reasoning model-scaling fine-tuning model-training benchmarking natural-language-processing terence_tao oriol_vinyals alexander_wei jerry_tworek paul_christiano eliezer_yudkowsky
OpenAI and Google DeepMind achieved a major milestone by solving 5 out of 6 problems at the International Mathematical Olympiad (IMO) 2025 within the human time limit of 4.5 hours, earning IMO Gold medal-level scores. This breakthrough was accomplished using general-purpose reinforcement learning and pure in-weights reasoning, without specialized tools or internet access, surpassing previous systems like AlphaProof and AlphaGeometry2. The result settled a 3-year-old bet about AI's ability to solve IMO problems and sparked discussion among mathematicians, including Terence Tao. Even so, 26 human competitors outperformed the AI on the hardest combinatorics problem (P6). The achievement highlights advances in reinforcement learning, reasoning, and model scaling in AI research.