Model: "openthinker3-7b"
Not much happened today
mistral-small-3.2 magenta-realtime afm-4.5b llama-3 openthinker3-7b deepseek-r1-distill-qwen-7b storm qwen2-vl gpt-4o dino-v2 sakana-ai mistral-ai google arcee-ai deepseek-ai openai amazon gdm reinforcement-learning chain-of-thought fine-tuning function-calling quantization music-generation foundation-models reasoning text-video model-compression image-classification evaluation-metrics sama
Sakana AI released Reinforcement-Learned Teachers (RLTs), a technique in which smaller 7B-parameter models are trained via reinforcement learning to teach reasoning through step-by-step explanations, accelerating Chain-of-Thought learning. Mistral AI updated Mistral Small 3.2, improving instruction following and function calling, with experimental FP8 quantization. Google released Magenta RealTime, an 800M-parameter open-weights model for real-time music generation. Arcee AI launched AFM-4.5B, a sub-10B-parameter foundation model extended from Llama 3. OpenThinker3-7B was introduced as a new state-of-the-art 7B reasoning model, a 33% improvement over DeepSeek-R1-Distill-Qwen-7B. The STORM text-video model compresses video input 8x using Mamba layers and outperforms GPT-4o on MVBench with 70.6%. Discussions of the reinforcement learning algorithms PPO vs. GRPO and insights into DINOv2's performance on ImageNet-1k were also highlighted. Overall "a very quiet day" in AI news, with valuable workshops from OpenAI, Amazon, and GDM.
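The PPO vs. GRPO debate largely comes down to how advantages are estimated: PPO trains a separate value network as a baseline, while GRPO normalizes rewards within a group of completions sampled for the same prompt, dropping the critic entirely. A minimal sketch of the group-relative advantage (illustrative only; function name and values are hypothetical, not from any specific library):

```python
import statistics

def grpo_advantages(group_rewards, eps=1e-8):
    """Group-relative advantages as used in GRPO: normalize each
    sampled completion's reward by the group's mean and standard
    deviation, so no learned value network is needed."""
    mean = statistics.mean(group_rewards)
    std = statistics.pstdev(group_rewards)  # population std over the group
    return [(r - mean) / (std + eps) for r in group_rewards]

# Rewards for four completions of the same prompt (made-up numbers)
advs = grpo_advantages([1.0, 0.0, 0.5, 0.5])
```

By construction the advantages sum to zero within each group, so above-average completions are reinforced and below-average ones are penalized, relative only to their siblings.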
Gemini 2.5 Pro (06-05) launched at AI Engineer World's Fair
gemini-2.5-pro qwen3-embedding-8b openthinker3-7b google qwen lighton morph-labs openai nvidia benchmarking reasoning coding math embedding-models late-interaction dataset-release model-performance model-architecture ai-conferences greg_brockman jensen_huang christian_szegedy swyx
On the second day of AIE, Google's Gemini 2.5 Pro reclaimed the top spot on the LMArena leaderboard with a score of 1470, a +24 Elo increase, showing improvements in coding, reasoning, and math. Qwen released state-of-the-art Qwen3 embedding and reranking models, with Qwen3-Embedding-8B topping the MTEB multilingual leaderboard. OpenThinker3-7B emerged as the top open reasoning model, trained on the OpenThoughts3-1.2M dataset and outperforming previous models by 33%. LightOn introduced FastPlaid, achieving up to a 554% speedup for late-interaction models. Morph Labs hired Christian Szegedy as Chief Scientist to lead its Verified Superintelligence effort. The AI Engineer World's Fair featured a fireside chat between Greg Brockman and NVIDIA CEO Jensen Huang, highlighting the return of basic research and engineering best practices.
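FastPlaid's speedup targets late-interaction retrieval, where a query and a document are each kept as sets of token embeddings and compared token-by-token rather than as single pooled vectors. A toy sketch of the ColBERT-style MaxSim score (illustrative only; this is not FastPlaid's implementation, and the vectors are made up):

```python
def late_interaction_score(query_vecs, doc_vecs):
    """ColBERT-style late interaction: match each query token vector
    to its best (maximum dot-product) document token vector, then
    sum those per-token maxima into one relevance score."""
    total = 0.0
    for q in query_vecs:
        best = max(sum(qi * di for qi, di in zip(q, d)) for d in doc_vecs)
        total += best
    return total

# Two query token vectors scored against three document token vectors
score = late_interaction_score(
    [[1.0, 0.0], [0.0, 1.0]],
    [[0.9, 0.1], [0.2, 0.8], [0.5, 0.5]],
)
```

Because every query token scans every document token, the scoring step is far heavier than a single dot product between pooled vectors, which is why engines optimizing this inner loop can claim large speedups.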