not much happened today
Tags: gpt-5.3-codex, claude-opus-4.6, openai, anthropic, cursor_ai, github, microsoft, builder-tooling, cybersecurity, api-access, model-rollout, agentic-ai, long-context, serving-economics, throughput-latency, token-efficiency, workflow-design, sama, pierceboggan, kylebrussell, natolambert, omarsar0, sam_altman
OpenAI launched GPT-5.3-Codex alongside a Super Bowl ad built around the tagline "You can just build things," signaling a product strategy centered on builder tooling rather than chat interfaces. The model is rolling out across Cursor, VS Code, and GitHub, with phased API access, and is flagged as OpenAI's first "high cybersecurity capability" model. Sam Altman reported over 1M Codex app downloads in the first week and strong weekly user growth. Meanwhile, Anthropic's Claude Opus 4.6 is being recognized as a leading "agentic generalist," topping text and code leaderboards, though it is noted for high token usage. Discussions of serving economics and "fast mode" behavior highlight practical deployment trade-offs around throughput, latency, and token efficiency. Separately, Recursive Language Models (RLMs) propose a novel way to extend long-context capability: the model works in a second, programmatic context space where long text lives as data it can query, rather than as prompt it must attend over.
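The RLM idea in the last sentence can be sketched as follows. This is a minimal illustration under assumptions, not the paper's implementation: `fake_lm` is a hypothetical stand-in for a real model call, and the chunk-and-recurse scheme shown is only one possible design for the programmatic context space.

```python
# Sketch of the Recursive Language Model (RLM) idea: instead of stuffing a long
# document into one prompt, the root model operates in a programmatic environment
# where the document is a variable, issuing recursive sub-queries over slices of it.

def fake_lm(prompt: str) -> str:
    """Hypothetical stand-in LM: 'answers' by returning the first line of the
    provided text that contains the query term (first line of the prompt)."""
    query, _, text = prompt.partition("\n")
    hits = [line for line in text.splitlines() if query in line]
    return hits[0] if hits else ""

def rlm_query(query: str, document: str, chunk_lines: int = 2) -> str:
    """Root 'model' splits the document into chunks and recursively queries each,
    so its own context only ever holds short sub-answers, not the full document."""
    lines = document.splitlines()
    sub_answers = []
    for i in range(0, len(lines), chunk_lines):
        chunk = "\n".join(lines[i:i + chunk_lines])
        ans = fake_lm(f"{query}\n{chunk}")  # recursive sub-call on one slice
        if ans:
            sub_answers.append(ans)
    # Final call sees only the distilled sub-answers, not the original document.
    return fake_lm(f"{query}\n" + "\n".join(sub_answers))

doc = "alpha: 1\nbeta: 2\ngamma: 3\nbeta again: 4"
print(rlm_query("beta", doc))  # prints "beta: 2"
```

The point of the pattern is that context length becomes an engineering variable: the full document never enters any single model call, only bounded slices and their summaries do.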
not much happened today
Tags: gemma-3-270m, canary-1b, parakeet-tdt-0.6b, nemotron-nano-v2, qwen-image-edit, dino-v3, nvidia, alibaba, tencent, meta-ai-fair, ibm, datology, synthetic-data, multilingual-asr, self-supervised-learning, vision, model-efficiency, training-data, data-augmentation, model-speedup, domain-transfer, demishassabis, adrgrondin, rasbt, reach_vb, ctnzr, clementdelangue, natolambert, _akhaliq, itspaulai, mervenoyann, xenovacom, tomaarsen, pratyushmaini, code_star, leavittron, k_schuerholt, giffmana
Gemma 3 270M, an ultra-small model optimized for edge and mobile use, was released and is gaining adoption. NVIDIA launched two open multilingual ASR models, Canary 1B and Parakeet-TDT 0.6B, trained on 1 million hours of audio and released under CC-BY licensing, plus the efficient Nemotron-Nano v2 9B model with significant speedups. Alibaba's Qwen-Image-Edit offers bilingual text editing and semantic image transformations. Tencent Hunyuan introduced a controllable game-world video generator trained on over 1 million gameplay recordings. Meta's DINOv3 presents a scalable self-supervised vision backbone with strong domain transfer. IBM quietly released efficient English embedding models under a commercial-friendly license. The BeyondWeb synthetic-data paper reports significant training-speed and performance gains over prior datasets. Analysis of the HRM architecture suggests its performance gains stem largely from data augmentation and scaffolding rather than architectural novelty. "Models and datasets are openly licensed and available on Hugging Face."