Gemini 3.1 Pro: 2x Gemini 3.0 on ARC-AGI-2
gemini-3.1-pro gemini-3-deep-think google google-deepmind geminiapp reasoning benchmarking agentic-ai cost-efficiency hallucination code-generation model-release developer-tools sundarpichai demishassabis jeffdean koraykv noamshazeer joshwoodward artificialanlys arena oriolvinyalsml scaling01
Google released Gemini 3.1 Pro as a developer preview integrated across the Gemini app, NotebookLM, the Gemini API / AI Studio, and Vertex AI, highlighting a significant reasoning jump (77.1% on ARC-AGI-2) alongside strong coding and agentic tool-use results, including 80.6% on SWE-Bench Verified. Independent evaluators such as Artificial Analysis and Arena confirmed top-tier performance and cost efficiency, though community reactions mixed excitement about practical gains with skepticism about benchmark targeting and complaints about rollout inconsistencies. Google frames the release as the same core intelligence powering Gemini 3 Deep Think, scaled for practical use, with notable mentions from leaders including @sundarpichai, @demishassabis, and @JeffDean.
OpenRouter's State of AI - An Empirical 100 Trillion Token Study
grok-code-fast gemini-3 gemini-3-deep-think gpt-5.1-codex-max openrouter deepseek anthropic google google-deepmind reasoning coding tokenization long-context model-architecture benchmarking agentic-ai prompt-engineering quocleix noamshazeer mirrokni
OpenRouter released its first usage survey, covering 7 trillion tokens proxied weekly and highlighting a pronounced skew toward roleplay (52%). DeepSeek's open-model market share has declined sharply as coding-model usage rises, and reasoning models' share of token usage surged from 0% to over 50%. Grok Code Fast sees heavy usage, while Anthropic leads in tool-calling and coding requests with roughly a 60% share. Input tokens quadrupled and output tokens tripled this year, driven mainly by programming use cases, which dominate both spending and volume. Separately, Google launched Gemini 3 Deep Think, featuring parallel thinking and scoring 45.1% on the ARC-AGI-2 benchmark, and previewed Titans, a long-context neural memory architecture scaling beyond 2 million tokens; both were announced by Google DeepMind and Google AI on Twitter.