All tags
Topic: "leaderboards"
not much happened today
claude-code gemini qwen3-coder gemini-2.5-flash exa openpipe coreweave statsig openai zed claude gemini langchain anthropic fair alibaba hud-evals agent-protocols interoperability standardization agent-evaluation coding-agents software-optimization web-browsing reinforcement-learning multi-turn-reasoning optimizer-design data-efficient-rlvr leaderboards benchmarking zeddotdev mathemagic1an hwchase17 giffmana gneubig crystalsssup sayashk _philschmid _akhaliq jaseweston
Exa raised a $700m Series B, OpenPipe was acquired by CoreWeave, and Statsig and Alex were acquired by OpenAI. The Agent Client Protocol (ACP), introduced by the Zed team to standardize IDE–agent interoperability, already supports the Claude Code and Gemini CLIs. LangChain 1.0 alpha unifies content blocks for reasoning and multimodal data. The OSWorld Verified leaderboard promotes reproducible evaluation of computer-use agents, including OpenAI and Anthropic models. FAIR revealed evidence of coding agents cheating on SWE-Bench Verified, while PR Arena hosts live coding-agent competitions. Benchmarks like GSO and the Holistic Agent Leaderboard test software-optimization and web-browsing tasks, with Qwen3-Coder and Gemini 2.5 Flash showing strong performance. Advances in reinforcement learning for tool use include SimpleTIR, which improves multi-turn tool-use success rates, and UI-TARS-2, which advances GUI agents. The DARLING optimizer improves quality and diversity in reasoning and instruction following, while DEPO achieves data-efficient RLVR with significant speedups.
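ACP is a JSON-RPC-based protocol between editors and agents. As a rough sketch of what such an exchange looks like (the framing style, `session/prompt` method name, and parameter shapes below are illustrative assumptions, not the actual ACP spec):

```python
import json

def make_request(request_id, method, params):
    # Build one JSON-RPC 2.0 request, newline-delimited for a stdio transport.
    msg = {"jsonrpc": "2.0", "id": request_id, "method": method, "params": params}
    return json.dumps(msg) + "\n"

def parse_message(line):
    # Parse one newline-delimited JSON-RPC message received from the peer.
    msg = json.loads(line)
    assert msg.get("jsonrpc") == "2.0", "not a JSON-RPC 2.0 message"
    return msg

# A hypothetical prompt turn sent from the editor to the agent:
req = make_request(
    1,
    "session/prompt",  # method name assumed for illustration
    {"sessionId": "s1", "prompt": [{"type": "text", "text": "fix the failing test"}]},
)
echoed = parse_message(req)
print(echoed["method"])
```

The point of standardizing at this layer is that any agent speaking the protocol over stdio can plug into any conforming editor, which is how one protocol can cover both the Claude Code and Gemini CLIs.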
Gemini 2.5 Pro + 4o Native Image Gen
gemini-2.5-pro gpt-4o google-deepmind openai lmarena_ai autoregressive-models multimodality reasoning coding instruction-following model-release leaderboards noam-shazeer allan-jabri gabe-goh
Gemini 2.5 Pro from Google DeepMind has taken the top spot on the LMArena leaderboard, surpassing Grok 3 by roughly 40 points, with contributions from Noam Shazeer integrating Flash Thinking techniques. It excels at reasoning, coding, STEM, multimodal tasks, and instruction following, and is available as a free, rate-limited experimental model via Google AI Studio and the Gemini App. Meanwhile, OpenAI released native image generation for GPT-4o, an autoregressive image model, with detailed insights shared by Allan Jabri and credits to Gabe Goh.
Gemma 2 2B + Scope + Shield
gemma-2b gemma-2-9b gemma-2-27b llama-3-1-405b sam-2 gpt-3.5 vicuna alpacaeval g-eval google-deepmind anthropic meta-ai-fair openai perplexity-ai nvidia lmsys knowledge-distillation leaderboards model-interpretability finetuning harm-detection video-segmentation voice publishers-program robotics-data-scaling quantization llm-evaluation prompt-engineering
Gemma 2 2B, a 2-billion-parameter model trained on 2 trillion tokens and distilled from a larger, unnamed LLM, has been released by Google DeepMind and shows strong leaderboard performance despite weaknesses in math. The Gemma 2 series, including the 9B and 27B models, has gained popularity since its June release. The team also released 400 sparse autoencoders (SAEs) for interpretability, inspired by Anthropic's research, along with ShieldGemma, a finetuned classifier that outperforms Meta's LlamaGuard at harm detection. Meanwhile, Meta AI announced that Llama-3.1-405B reached #3 on the Overall Arena leaderboard and released SAM 2, a video and image segmentation model with significant speed improvements. OpenAI is rolling out Advanced Voice Mode to Plus users. Perplexity AI launched a Publishers Program with major media partners and a status page. NVIDIA introduced Project GR00T for scaling robot data using Apple Vision Pro and generative simulation. Interest in quantization for compressing LLMs is growing, and LLM-as-a-Judge implementations from Vicuna, AlpacaEval, and G-Eval highlight the effectiveness of simple prompts and domain-specific evaluation.
LMSys advances Llama 3 eval analysis
llama-3-70b llama-3 claude-3-sonnet alphafold-3 lmsys openai google-deepmind isomorphic-labs benchmarking model-behavior prompt-complexity model-specification molecular-structure-prediction performance-analysis leaderboards demis-hassabis sam-altman mira-murati karina-nguyen joanne-jang john-schulman
LMSys is enhancing LLM evaluation by categorizing performance across 8 query subcategories and 7 prompt complexity levels, revealing uneven strengths in models like Llama-3-70b. DeepMind released AlphaFold 3, advancing molecular structure prediction with holistic modeling of protein-DNA-RNA complexes, impacting biology and genetics research. OpenAI introduced the Model Spec, a public standard to clarify model behavior and tuning, inviting community feedback and aiming for models to learn directly from it. Llama 3 has reached top leaderboard positions on LMSys, nearly matching Claude-3-sonnet in performance, with notable variations on complex prompts. The analysis highlights the evolving landscape of model benchmarking and behavior shaping.
1/3/2024: RIP Coqui
sdxl diffusers-0.25 coqui mozilla hugging-face google text-to-speech performance-optimization token-management transformer-architecture image-datasets web-crawling pytorch leaderboards
Coqui, a prominent open-source text-to-speech project that grew out of the Mozilla ML group, officially shut down. Discussions in the HuggingFace Discord expressed skepticism about the claimed "3X faster" speed of SDXL, attributing the improvements more to techniques like torch.compile and the removal of fp16 and attention than to diffusers 0.25 features. Users confirmed that a HuggingFace user token can be used across multiple machines, though distinct tokens are recommended for safety. The LLM Leaderboard briefly experienced issues but was later confirmed operational. A Kaggle notebook was shared demonstrating how to build Transformer architectures from scratch using PyTorch. Additionally, a new image dataset with 15k shoe, sandal, and boot images was introduced for multiclass classification tasks, and explanations of how the Common Crawl web-crawling process works were shared.
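On building Transformers from scratch: the central building block is scaled dot-product attention, softmax(QKᵀ/√d)V, which reduces to a few lines. A dependency-free sketch using plain Python lists (the referenced notebook uses PyTorch tensors, but the arithmetic is the same):

```python
import math

def softmax(xs):
    # Numerically stable softmax over a list of scores.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attention(Q, K, V):
    # Scaled dot-product attention: each query takes a softmax-weighted
    # average of the value rows, weighted by query-key similarity.
    d = len(Q[0])
    out = []
    for q in Q:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d) for k in K]
        weights = softmax(scores)
        out.append([sum(w * v[j] for w, v in zip(weights, V)) for j in range(len(V[0]))])
    return out

Q = [[1.0, 0.0], [0.0, 1.0]]   # two queries, dimension 2
K = [[1.0, 0.0], [0.0, 1.0]]   # two keys
V = [[1.0, 2.0], [3.0, 4.0]]   # two value rows
print(attention(Q, K, V))
```

Stacking this with multiple heads, residual connections, layer norms, and feed-forward blocks yields the full Transformer layer the notebook walks through.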