Topic: "data-augmentation"
not much happened today
gemma-3-270m, canary-1b, parakeet-tdt-0.6b, nemotron-nano-v2, qwen-image-edit, dino-v3, nvidia, alibaba, tencent, meta-ai-fair, ibm, datology, synthetic-data, multilingual-asr, self-supervised-learning, vision, model-efficiency, training-data, data-augmentation, model-speedup, domain-transfer, demishassabis, adrgrondin, rasbt, reach_vb, ctnzr, clementdelangue, natolambert, _akhaliq, itspaulai, mervenoyann, xenovacom, tomaarsen, pratyushmaini, code_star, leavittron, k_schuerholt, giffmana
Gemma 3 270M, an ultra-small model optimized for edge and mobile use, was released and is gaining adoption. NVIDIA launched two open multilingual ASR models, Canary 1B and Parakeet-TDT 0.6B, trained on 1 million hours of data with CC-BY licensing, plus the efficient Nemotron-Nano v2 9B model with significant speedups. Alibaba's Qwen-Image-Edit offers bilingual text editing and semantic image transformations. Tencent Hunyuan introduced a controllable game-world video generator trained on over 1 million gameplay recordings. Meta's DINOv3 presents a scalable self-supervised vision backbone with strong domain transfer. IBM quietly released efficient English embedding models under a commercial-friendly license. The BeyondWeb synthetic data paper reports significant gains in training speed and downstream performance over prior synthetic datasets. An analysis of the HRM architecture suggests its performance gains stem largely from data augmentation and scaffolding rather than architectural novelty. "Models and datasets are openly licensed and available on Hugging Face."
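On that last point, a model this small can be tried locally in a few lines with the standard Hugging Face transformers text-generation pipeline. The sketch below is illustrative only: it assumes the checkpoint id google/gemma-3-270m-it, a recent transformers release with Gemma 3 support, and that the model's license has been accepted on the Hub.

```python
# Minimal sketch: run the ultra-small Gemma 3 270M instruct model locally via
# the transformers text-generation pipeline. The checkpoint id below is an
# assumption; accepting the model license on Hugging Face may be required.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="google/gemma-3-270m-it",  # assumed checkpoint id
)

messages = [
    {"role": "user", "content": "Summarize why small on-device models are useful."}
]
out = generator(messages, max_new_tokens=128)
# For chat-style input, the pipeline returns the full conversation; the last
# message is the newly generated assistant reply.
print(out[0]["generated_text"][-1]["content"])
```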
RouteLLM: RIP Martian? (Plus: AINews Structured Summaries update)
gpt-4, gemma-2-27b, gemma-2-9b, lmsys, openai, llm-routing, cost-efficiency, model-performance, model-optimization, data-augmentation, syntax-based-routing, mixture-of-experts, inference-throughput, software-2.0, computer-vision, karpathy, bindureddy, armand-joulin
LMSys introduces RouteLLM, an open-source router framework trained on preference data from Chatbot Arena, achieving cost reductions of over 85% on MT Bench, 45% on MMLU, and 35% on GSM8K while maintaining 95% of GPT-4's performance. Unlike earlier syntax-based or task-specific routing, and unlike Mixture of Experts (MoE) routing that happens inside a single model, RouteLLM learns from preference data strengthened with data augmentation and beats commercial routing solutions by over 40%. The update frames this as progress in LLM routing, cost-efficiency, and performance optimization across multiple models rather than single-model or MoE-level improvements. Additionally, the AI Twitter recap notes the Gemma 2 model family as a top open model, the Block Transformer architecture for improved inference throughput, and a proposal from karpathy for a fully Software 2.0 computer vision system.
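To make the routing idea concrete, here is a minimal, hypothetical sketch of cost-aware routing between a strong and a weak model: a learned router scores how likely the cheap model's answer is to be preferred, and a threshold trades cost against quality. The Router class, the toy_win_rate scorer, and the threshold value are illustrative assumptions, not the RouteLLM API.

```python
# Hypothetical sketch of preference-trained LLM routing (not the RouteLLM API).
# A router predicts the probability that the weak (cheap) model's answer would
# be preferred over the strong model's; queries above a threshold stay cheap.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Router:
    # win_rate: learned predictor of P(weak model's answer is good enough | query)
    win_rate: Callable[[str], float]
    threshold: float = 0.7  # illustrative; tune to trade cost vs. quality

    def choose_model(self, query: str) -> str:
        p = self.win_rate(query)
        return "weak-cheap-model" if p >= self.threshold else "gpt-4"

# Toy stand-in for a preference-trained scorer: short queries are assumed
# answerable by the cheap model. A real router would be trained on Chatbot
# Arena preference data (plus augmented labels), not a length heuristic.
def toy_win_rate(query: str) -> float:
    return 0.9 if len(query.split()) < 20 else 0.3

router = Router(win_rate=toy_win_rate)
print(router.choose_model("What is the capital of France?"))  # -> weak-cheap-model
print(router.choose_model(" ".join(["explain"] * 30)))        # -> gpt-4
```

Raising the threshold sends more traffic to the strong model (higher quality, higher cost); lowering it saves cost at some quality risk, which is the trade-off the benchmark numbers above quantify.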