Cohere Command A Reasoning beats GPT-OSS-120B and DeepSeek R1 0528
command-a-reasoning deepseek-v3.1 cohere deepseek intel huggingface baseten vllm-project chutes-ai anycoder agentic-ai hybrid-models long-context fp8-training mixture-of-experts benchmarking quantization reasoning coding-workflows model-pricing artificialanlys reach_vb scaling01 cline ben_burtenshaw haihaoshen jon_durbin _akhaliq willccbb teortaxestex
Cohere's Command A Reasoning model outperforms GPT-OSS in open deep research capabilities, emphasizing agentic use cases for 2025. DeepSeek-V3.1 introduces a hybrid reasoning architecture that toggles between reasoning and non-reasoning modes, optimized for agentic workflows and coding, with extensive long-context pretraining (~630B tokens for the 32k-context phase, ~209B for 128k), FP8 training, and an MoE design activating roughly 37B parameters per token. Benchmarks show competitive performance, with notable gains on SWE-Bench and other reasoning tasks. The DeepSeek API prices the model at $0.56/M input tokens and $1.68/M output tokens, and the release saw rapid ecosystem integration, including HF weights, INT4 quantization by Intel, and reasoning toggles in vLLM. Community feedback highlights the hybrid design's pragmatic fit for agent and software-engineering workflows, though some note the lack of tool use in reasoning mode.
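The quoted API rates make per-request cost easy to estimate. A minimal sketch, assuming the $0.56/M input and $1.68/M output prices above (the function name and token counts are illustrative, not part of the DeepSeek API):

```python
# Cost estimator at the quoted DeepSeek API rates:
# $0.56 per million input tokens, $1.68 per million output tokens.
INPUT_PRICE_PER_M = 0.56
OUTPUT_PRICE_PER_M = 1.68

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the USD cost of a single API call at the quoted rates."""
    return (input_tokens / 1e6) * INPUT_PRICE_PER_M \
         + (output_tokens / 1e6) * OUTPUT_PRICE_PER_M

# Example: a long-context agentic call with 100k input and 8k output tokens.
print(round(request_cost(100_000, 8_000), 4))  # 0.0694
```

At these rates a sizable 100k-token context still costs well under a dime per call, which is part of why commenters see the model as practical for agentic workflows.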
OpenAI's IMO Gold model also wins IOI Gold
gpt-5 gpt-5-thinking gpt-5-mini gemini-2.5-pro claude opus-4.1 openai google-deepmind anthropic reinforcement-learning benchmarking model-performance prompt-engineering model-behavior competitive-programming user-experience model-naming model-selection hallucination-detection sama scaling01 yanndubs sherylhsu ahmed_el-kishky jerry_tworek noam_brown alex_wei amandaaskell ericmitchellai jon_durbin gdb jerryjliu0
OpenAI announced that its model placed #6 among human coders at the IOI, reflecting rapid progress in competitive-programming AI over the past two years. The GPT-5 launch faced significant user backlash over restrictive usage limits and the removal of model-selection control, prompting a reversal and an increase to 3,000 requests per week for Plus users. Confusion around GPT-5 naming and benchmarking persisted, with critiques of methodological issues in comparisons against models like Claude and Gemini. Performance reviews of GPT-5 are mixed: OpenAI staff claim near-zero hallucinations, while users report confidently delivered hallucinations and steering difficulties. Benchmarks show GPT-5 mini performing well on document understanding, while the full GPT-5 is seen as expensive and middling. On the Chatbot Arena, Gemini 2.5 Pro holds a 67% winrate against GPT-5 Thinking. Prompting and model behavior remain key discussion points.
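For intuition on what a 67% head-to-head winrate means on an Elo-style leaderboard, the standard Elo expected-score formula can be inverted to get the implied rating gap. A sketch using the textbook formula (this is a generic Elo calculation, not the Chatbot Arena's exact methodology):

```python
import math

def elo_gap_from_winrate(p: float) -> float:
    """Invert the standard Elo expected-score formula:
    p = 1 / (1 + 10**(-d/400))  =>  d = 400 * log10(p / (1 - p))."""
    return 400 * math.log10(p / (1 - p))

# The reported 67% winrate corresponds to roughly a 123-point Elo gap.
print(round(elo_gap_from_winrate(0.67)))  # 123
```

So the reported winrate, taken at face value, would imply a rating gap on the order of 120+ points between Gemini 2.5 Pro and GPT-5 Thinking.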