Person: "code_star"
not much happened today
gemma-3-270m canary-1b parakeet-tdt-0.6b nemotron-nano-v2 qwen-image-edit dino-v3 nvidia alibaba tencent meta-ai-fair ibm datology synthetic-data multilingual-asr self-supervised-learning vision model-efficiency training-data data-augmentation model-speedup domain-transfer demishassabis adrgrondin rasbt reach_vb ctnzr clementdelangue natolambert _akhaliq itspaulai mervenoyann xenovacom tomaarsen pratyushmaini code_star leavittron k_schuerholt giffmana
Gemma 3 270M, an ultra-small model optimized for edge and mobile use, was released and is gaining adoption. NVIDIA launched two open multilingual ASR models, Canary 1B and Parakeet-TDT 0.6B, trained on 1 million hours of data and released under CC-BY licensing, plus the efficient Nemotron-Nano v2 9B model with significant speedups. Alibaba's Qwen-Image-Edit offers bilingual text editing and semantic image transformations. Tencent Hunyuan introduced a controllable game-world video generator trained on over 1 million gameplay recordings. Meta's DINOv3 presents a scalable self-supervised vision backbone with strong domain transfer. IBM quietly released efficient English embedding models under a commercially friendly license. The BeyondWeb synthetic-data paper reports significant gains in training speed and downstream performance over prior datasets. An analysis of the HRM architecture suggests its performance improvements stem largely from data augmentation and scaffolding rather than architectural novelty. "Models and datasets are openly licensed and available on Hugging Face."
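Since the summary notes that these releases are openly licensed on Hugging Face, here is a minimal sketch of trying the smallest one locally. It assumes the Gemma 3 270M checkpoint is published as google/gemma-3-270m and works with the standard transformers text-generation pipeline; neither detail is confirmed by the item above.

```python
from transformers import pipeline

# Minimal sketch: load the ultra-small Gemma 3 270M checkpoint and generate text.
# The checkpoint id "google/gemma-3-270m" is an assumption based on the release name;
# Gemma checkpoints are typically gated, so accepting the license and running
# `huggingface-cli login` may be required first.
generator = pipeline("text-generation", model="google/gemma-3-270m")

out = generator("Small on-device models are useful because", max_new_tokens=40)
print(out[0]["generated_text"])
```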
minor ai followups: MultiAgents, Meta-SSI-Scale, Karpathy, AI Engineer
gpt-4o afm-4.5b gemma qwen stt-1b-en_fr stt-2.6b-en hunyuan-3d-2.1 openai meta-ai-fair scale-ai huggingface tencent arcee-ai ai-safety alignment ai-regulation memory-optimization scalable-oversight speech-recognition 3d-generation foundation-models sama polynoamial neelnanda5 teortaxestex yoshua_bengio zachtratar ryanpgreenblatt reach_vb arankomatsuzaki code_star
OpenAI released a paper revealing how training models like GPT-4o on insecure code can cause broad misalignment, drawing reactions from experts including @sama and @polynoamial. @Yoshua_Bengio highlighted California's AI regulation efforts, emphasizing transparency and whistleblower protections. The term "context rot" was coined to describe the degradation of long LLM conversations, with systems like Embra using CRM-like memory for robustness. @RyanPGreenblatt discussed scalable oversight research aimed at improving human control over smarter AIs. New model releases include Kyutai's speech-to-text models, capable of 400 real-time streams on a single H100 GPU; Tencent's Hunyuan 3D 2.1, the first open-source production-ready PBR 3D generative model; and Arcee's AFM-4.5B foundation model family targeting enterprise use and competitive with Gemma and Qwen.
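As a companion sketch for the releases above, the following shows how one might prompt Arcee's AFM-4.5B through transformers. The checkpoint id arcee-ai/AFM-4.5B and the plain causal-LM interface are assumptions inferred from the release name, not confirmed details.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Sketch only: "arcee-ai/AFM-4.5B" is an assumed Hugging Face repository id and
# may differ from the actual one published by Arcee.
model_id = "arcee-ai/AFM-4.5B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = "List three considerations when deploying a small foundation model in an enterprise setting."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```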