Topic: "model-benchmarks"
not much happened today
Tags: gemma-3n, glm-4.1v-thinking, deepseek-r1t2, mini-max-m1, o3, claude-4-opus, claude-sonnet, moe-72b, meta, scale-ai, unslothai, zhipu-ai, deepseek, huawei, minimax-ai, allenai, sakana-ai-labs, openai, model-performance, vision, conv2d, float16, training-loss, open-source, model-benchmarks, moe, load-balancing, scientific-literature-evaluation, code-generation, adaptive-tree-search, synthesis-benchmarks, alexandr_wang, natfriedman, steph_palazzolo, thegregyang, teortaxes_tex, denny_zhou, agihippo, danielhanchen, osanseviero, reach_vb, scaling01, ndea
Meta has hired Scale AI CEO Alexandr Wang as its new Chief AI Officer and acquired a 49% non-voting stake in Scale AI for $14.3 billion, roughly doubling Scale AI's valuation to ~$28 billion. The move is part of a major talent shuffle involving Meta, OpenAI, and Scale AI, with discussion focusing on the impact on Yann LeCun's influence at Meta and potential responses from OpenAI. In model news, Gemma 3N faces technical issues such as vision NaNs and FP16 overflows, with fixes from UnslothAI. Chinese open-source models, including GLM-4.1V-Thinking from Zhipu AI and DeepSeek R1T2, show strong performance and speed improvements. Huawei open-sourced a 72B MoE model with a novel load-balancing solution. The MiniMax-M1 hybrid MoE model leads math benchmarks on the Text Arena leaderboard. AllenAI launched SciArena for scientific literature evaluation, where o3 outperforms other models. Research from Sakana AI Labs introduces AB-MCTS for code generation, improving results on synthesis benchmarks.
not much happened today
Tags: gpt-image-1, o3, o4-mini, gpt-4.1, dam, openai, google, anthropic, epoch-ai-research, image-generation, model-benchmarks, vision-language-models, music-ai, ai-experiences, ai-research, supercomputers
AI news for April 23-24, 2025, covering new model releases, benchmarks, and research developments from companies including OpenAI, Google DeepMind, Anthropic, and Epoch AI Research.