All tags
Topic: "foundation-models"
Bartz v. Anthropic PBC: "Training use is Fair Use"
claude gemini-robotics-on-device anthropic replit delphi sequoia thinking-machines-lab disney universal midjourney google-deepmind fair-use copyright reinforcement-learning foundation-models robotics funding lawsuit digital-minds model-release andrea_bartz giffmana andrewcurran_ amasad swyx hwchase17 krandiash daraladje steph_palazzolo corbtt demishassabis
Anthropic won a significant fair use ruling allowing it to train Claude on copyrighted books, setting a precedent for the legality of AI training, though concerns over its use of pirated data remain. Replit hit a major milestone of $100M ARR after rapid growth. Delphi raised a $16M Series A to scale digital minds, while Thinking Machines Lab is focusing on reinforcement learning for business applications. Disney and Universal sued Midjourney over unauthorized use of copyrighted images. Google DeepMind released Gemini Robotics On-Device, a compact foundation model for robotics.
Not much happened today
mistral-small-3.2 magenta-realtime afm-4.5b llama-3 openthinker3-7b deepseek-r1-distill-qwen-7b storm qwen2-vl gpt-4o dino-v2 sakana-ai mistral-ai google arcee-ai deepseek-ai openai amazon gdm reinforcement-learning chain-of-thought fine-tuning function-calling quantization music-generation foundation-models reasoning text-video model-compression image-classification evaluation-metrics sama
Sakana AI released Reinforcement-Learned Teachers (RLTs), a technique in which smaller 7B-parameter models are trained via reinforcement learning to teach reasoning through step-by-step explanations, accelerating chain-of-thought learning. Mistral AI updated Mistral Small 3.2, improving instruction following and function calling, with experimental FP8 quantization. Google released Magenta RealTime, an 800M-parameter open-weights model for real-time music generation. Arcee AI launched AFM-4.5B, a sub-10B-parameter foundation model extended from Llama 3. OpenThinker3-7B was introduced as a new state-of-the-art 7B reasoning model, with a 33% improvement over DeepSeek-R1-Distill-Qwen-7B. The STORM text-video model compresses video input by 8x using Mamba layers and outperforms GPT-4o on MVBench with 70.6%. Discussions of the PPO vs. GRPO reinforcement learning algorithms and of DINOv2's performance on ImageNet-1k were also highlighted. Overall, "a very quiet day" in AI news, with valuable workshops from OpenAI, Amazon, and GDM.
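The PPO vs. GRPO discussion largely comes down to how advantages are estimated: PPO relies on a learned value function as a baseline, while GRPO normalizes rewards within a group of completions sampled for the same prompt. A minimal sketch of that group-relative advantage computation (illustrative only, not any lab's implementation):

```python
import numpy as np

def grpo_advantages(group_rewards, eps=1e-8):
    """Group-relative advantages: normalize each completion's reward against the
    mean/std of all completions sampled for the same prompt. This replaces PPO's
    learned value baseline with a per-group statistic."""
    r = np.asarray(group_rewards, dtype=np.float64)
    return (r - r.mean()) / (r.std() + eps)

# Example: 4 completions sampled for one prompt, scored by a reward model.
print(grpo_advantages([0.2, 0.9, 0.4, 0.9]))  # higher-reward samples get positive advantage
```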
minor ai followups: MultiAgents, Meta-SSI-Scale, Karpathy, AI Engineer
gpt-4o afm-4.5b gemma qwen stt-1b-en_fr stt-2.6b-en hunyuan-3d-2.1 openai meta-ai-fair scale-ai huggingface tencent arcee-ai ai-safety alignment ai-regulation memory-optimization scalable-oversight speech-recognition 3d-generation foundation-models sama polynoamial neelnanda5 teortaxestex yoshua_bengio zachtratar ryanpgreenblatt reach_vb arankomatsuzaki code_star
OpenAI released a paper showing how training models like GPT-4o on insecure code can cause broad misalignment, drawing reactions from experts including @sama and @polynoamial. @Yoshua_Bengio highlighted California's AI regulation efforts, emphasizing transparency and whistleblower protections. The term "context rot" was coined to describe the degradation of long LLM conversations, with systems like Embra using CRM-like memory for robustness. @RyanPGreenblatt discussed scalable oversight research aimed at improving human control over smarter AIs. New model releases include Kyutai's speech-to-text models, capable of 400 real-time streams on a single H100 GPU; Tencent's Hunyuan 3D 2.1, billed as the first open-source production-ready PBR 3D generative model; and Arcee's AFM-4.5B foundation model family targeting enterprise use, competitive with Gemma and Qwen.
Zuck goes Superintelligence Founder Mode: $100M bonuses + $100M+ salaries + NFDG Buyout?
llama-4 maverick scout minimax-m1 afm-4.5b chatgpt midjourney-v1 meta-ai-fair openai deeplearning-ai essential-ai minimax arcee midjourney long-context multimodality model-release foundation-models dataset-release model-training video-generation enterprise-ai model-architecture moe prompt-optimization sama nat dan ashvaswani clementdelangue amit_sangani andrewyng _akhaliq
Meta AI is reportedly offering 8-9 figure signing bonuses and salaries to top AI talent, a report Sam Altman has confirmed. Meta is also targeting key figures such as Nat Friedman and Daniel Gross of the AI Grant fund (NFDG) for strategic hires. Essential AI released Essential-Web v1.0, a massive 24-trillion-token dataset with rich metadata and a 12-category taxonomy. DeepLearning.AI and Meta AI launched a course on Llama 4, featuring the new MoE models Maverick (400B) and Scout (109B) with context windows up to 10M tokens. MiniMax open-sourced MiniMax-M1, a long-context LLM with a 1M-token window, and introduced the Hailuo 02 video model. OpenAI rolled out "Record mode" for ChatGPT Pro, Enterprise, and Edu on macOS. Arcee launched the AFM-4.5B foundation model for enterprise use. Midjourney released its V1 video model, enabling image animation. Together these developments highlight major advances in model scale, long-context reasoning, multimodality, and enterprise AI applications.
not much happened today
seedance-1.0 codex claude-code kling-2.1 veo-3 bytedance morph-labs huggingface deeplearning.ai figure-ai langchain sakana-ai video-generation autoformalization ai-assisted-coding api-design context-engineering reinforcement-learning ai-evals hypernetworks model-fine-tuning foundation-models andrew_ng hwchase17 adcock_brett clementdelangue akhaliq jxmnop hamelhusain sh_reya
ByteDance showcased Seedance 1.0, an impressive state-of-the-art video generation model, without releasing it, while Morph Labs announced Trinity, an autoformalization system for Lean. Hugging Face Transformers deprecated TensorFlow/JAX support. Andrew Ng of DeepLearning.AI highlighted the rise of the GenAI Application Engineer role, emphasizing skills in AI building blocks and AI-assisted coding tools like Codex and Claude Code. Engineering teams are increasingly testing API designs against LLMs for usability. Figure AI's CEO stressed speed as a key competitive advantage, and LangChain introduced the concept of context engineering for AI agents. Reinforcement learning on LLMs shows transformative potential, and the community continues to value AI evals and data work. Sakana AI released Text-to-LoRA, a hypernetwork method for generating task-specific LoRA adapters from natural-language descriptions, enabling efficient model customization. The video generation race is heating up, with ByteDance's Seed model praised for quality that challenges American labs, alongside models like Kling 2.1 and Veo 3.
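The core idea behind Text-to-LoRA is a hypernetwork that maps a natural-language task description (via its embedding) to the low-rank A/B matrices of a LoRA adapter. A toy PyTorch sketch of that mapping, with hypothetical sizes and no claim to match Sakana AI's actual architecture:

```python
import torch
import torch.nn as nn

class TextToLoRA(nn.Module):
    """Toy hypernetwork: task-description embedding -> LoRA A and B matrices
    for a single target linear layer (hypothetical dimensions, illustration only)."""
    def __init__(self, emb_dim=768, hidden=512, d_model=2048, rank=8):
        super().__init__()
        self.rank, self.d_model = rank, d_model
        self.net = nn.Sequential(
            nn.Linear(emb_dim, hidden), nn.GELU(),
            nn.Linear(hidden, 2 * d_model * rank),
        )

    def forward(self, task_emb):                 # task_emb: (emb_dim,)
        flat = self.net(task_emb)
        A, B = flat.split(self.d_model * self.rank)
        return A.view(self.rank, self.d_model), B.view(self.d_model, self.rank)

hyper = TextToLoRA()
A, B = hyper(torch.randn(768))                   # pretend this is an encoded task description
delta_W = B @ A                                  # low-rank update applied as W + delta_W
print(delta_W.shape)                             # torch.Size([2048, 2048])
```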
Apple exposes Foundation Models API and... no new Siri
chatgpt apple openai langchain llamaindex on-device-ai foundation-models reasoning reinforcement-learning voice translation software-automation agentic-workflows gdb scaling01 giffmana kevinweil
Apple released on-device foundation models for iOS developers, though its recent "The Illusion of Thinking" paper faced significant backlash for flawed methodology regarding LLM reasoning. OpenAI updated ChatGPT's Advanced Voice Mode with a more natural voice and improved translation, demonstrated by Greg Brockman. LangChain and LlamaIndex launched new AI agents and tools, including a SWE agent for software automation and an Excel agent that uses reinforcement learning for data transformation. The AI community engaged in heated debate over the reasoning capabilities of LLMs, highlighting challenges in evaluation methods.
AI Engineer World's Fair: Second Run, Twice The Fun
gemini-2.5-pro google-deepmind waymo tesla anthropic braintrust retrieval-augmentation graph-databases recommendation-systems software-engineering-agents agent-reliability reinforcement-learning voice image-generation video-generation infrastructure security evaluation ai-leadership enterprise-ai mcp tiny-teams product-management design-engineering robotics foundation-models coding web-development demishassabis
The 2025 AI Engineer World's Fair is expanding with 18 tracks covering topics like Retrieval + Search, GraphRAG, RecSys, SWE-Agents, Agent Reliability, Reasoning + RL, Voice AI, Generative Media, Infrastructure, Security, and Evals. New focuses include MCP, Tiny Teams, Product Management, Design Engineering, and Robotics and Autonomy featuring foundation models from Waymo, Tesla, and Google. The event highlights the growing importance of AI Architects and enterprise AI leadership. Additionally, Demis Hassabis announced the Gemini 2.5 Pro Preview 'I/O edition', which leads coding and web development benchmarks on LMArena.
Every 7 Months: The Moore's Law for Agent Autonomy
claude-3-7-sonnet llama-4 phi-4-multimodal gpt-2 cosmos-transfer1 gr00t-n1-2b orpheus-3b metr nvidia hugging-face canopy-labs meta-ai-fair microsoft agent-autonomy task-completion multimodality text-to-speech robotics foundation-models model-release scaling-laws fine-tuning zero-shot-learning latency reach_vb akhaliq drjimfan scaling01
METR published a paper measuring progress in AI agent autonomy, finding that the length of tasks agents can complete has doubled every 7 months since 2019 (starting from GPT-2). They introduced a new metric, the 50%-task-completion time horizon: the human task length at which a model succeeds 50% of the time, about 50 minutes for models like Claude 3.7 Sonnet. Projections estimate 1-day autonomy by 2028 and 1-month autonomy by late 2029. Meanwhile, Nvidia released Cosmos-Transfer1 for conditional world generation and GR00T-N1-2B, a 2B-parameter open foundation model for humanoid robot reasoning. Canopy Labs introduced Orpheus 3B, a high-quality text-to-speech model with zero-shot voice cloning and low latency. Meta reportedly delayed the Llama-4 release due to performance issues. Microsoft launched Phi-4-multimodal.
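Because the metric is a simple exponential trend, the projections are easy to reproduce as back-of-the-envelope arithmetic. A quick sketch, assuming a roughly 1-hour 50% time horizon today and clean doubling every 7 months (a simplification of METR's actual fit):

```python
import math

DOUBLING_MONTHS = 7          # reported doubling period
CURRENT_MINUTES = 60         # assumed ~1 hour 50% time horizon for a frontier model today

def months_until(target_minutes, current=CURRENT_MINUTES):
    """Months until the 50% time horizon reaches target_minutes,
    assuming exponential growth with a fixed doubling period."""
    doublings = math.log2(target_minutes / current)
    return doublings * DOUBLING_MONTHS

print(f"1 work day   (8h):   ~{months_until(8 * 60):.0f} months")    # ~21 months
print(f"1 work month (167h): ~{months_until(167 * 60):.0f} months")  # ~52 months
```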
The Ultra-Scale Playbook: Training LLMs on GPU Clusters
deepseek-native-sparse-attention r1-1776 paligemma-2-mix muse baichuan-m1-14b stripedhyena-2 huggingface deepseek perplexity-ai google-deepmind microsoft baichuan stripedhyena gpu-training scaling multimodality vision model-training foundation-models medical-llm genome-modeling robotic-manipulation interactive-content eliebakouch nouamanetazi lvwerra thom-wolf proftomyeh alex-wang aravsrinivas _akhaliq _philschmid mervenoyann reach_vb arankomatsuzaki maximelabonne
Hugging Face released "The Ultra-Scale Playbook: Training LLMs on GPU Clusters," an interactive blog post based on 4,000 scaling experiments on up to 512 GPUs, providing detailed insights into modern GPU training strategies. DeepSeek introduced Native Sparse Attention (NSA), which drew significant community attention, while Perplexity AI launched R1-1776, an uncensored and unbiased version of DeepSeek's R1 model. Google DeepMind unveiled PaliGemma 2 Mix, a multi-task vision-language model available in 3B, 10B, and 28B sizes. Microsoft introduced Muse, a generative AI model trained on the game Bleeding Edge, and presented Magma, a foundation model for multimodal AI agents that excels at UI navigation and robotic manipulation. Baichuan-M1-14B was announced as a state-of-the-art medical LLM trained on 20T tokens, and a fully open-source 40B genome modeling model using the StripedHyena 2 architecture was also released. "Making your own gaming experience is coming sooner than you'd think," it was noted in relation to Muse.
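A recurring theme of this kind of playbook is GPU memory accounting: with mixed-precision Adam, each parameter typically costs on the order of 16 bytes before activations (bf16 weights and gradients plus fp32 master weights and two optimizer moments). A rough estimator along those lines; the exact byte counts are the usual textbook breakdown, not figures taken from the playbook itself:

```python
def training_memory_gb(n_params, bytes_per_param=16):
    """Rough static memory for mixed-precision Adam training, ignoring activations:
    2 (bf16 weights) + 2 (bf16 grads) + 4 (fp32 master) + 4 + 4 (Adam m, v) = 16 bytes/param."""
    return n_params * bytes_per_param / 1e9

for billions in (1, 8, 70):
    print(f"{billions}B params -> ~{training_memory_gb(billions * 1e9):,.0f} GB before activations")
```

This is why even a 70B model (roughly 1.1 TB of optimizer state alone) forces sharding across many GPUs, which is exactly the territory the playbook's parallelism strategies cover.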
not much happened today
cosmos nvidia openai robotics autonomous-driving open-source fine-tuning foundation-models memory-optimization sama
NVIDIA has launched Cosmos, an open-source video world model trained on 20 million hours of video, aimed at advancing robotics and autonomous driving. The release sparked debate over its open-source status and technical approach. Additionally, NVIDIA announced Digits, a $3,000 personal AI supercomputer designed to democratize AI computing. The AI community expresses mixed feelings about rapid AI progress, with concerns about AGI, job displacement, and investment hype. Discussions also highlight upcoming tools for fine-tuning AI models at home and foundation models for AI robotics.
Liquid Foundation Models: A New Transformers alternative + AINews Pod 2
llama-3-2 gemini-1.5-pro-002 gemini-1.5-flash-002 liquid-ai meta-ai-fair google-deepmind openai reinforcement-learning multimodality model-efficiency foundation-models audio-processing model-deployment open-source ylecun svpino
Liquid.ai emerged from stealth with three subquadratic foundation models demonstrating superior efficiency compared to state space models and Apple's on-device and server models, backed by a $37M seed round. Meta AI announced Llama 3.2, with multimodal vision-enabled models and lightweight text-only variants for mobile. Google DeepMind introduced the production-ready Gemini-1.5-Pro-002 and Gemini-1.5-Flash-002 models with improved pricing and rate limits, alongside AlphaChip, an AI-driven chip design system that uses reinforcement learning for rapid superhuman layouts. OpenAI enhanced ChatGPT Plus and Teams with Advanced Voice Mode featuring Custom Instructions, Memory, and new nature-inspired voices. California's Governor vetoed the SB-1047 AI regulation bill, a move celebrated by AI community figures like ylecun and svpino as a win for open-source AI. Google upgraded NotebookLM with audio overviews supporting YouTube and audio files, turning documents into AI-generated podcasts. "Open source in AI is thriving," noted ylecun, pointing to 1 million models on GitHub and Hugging Face.
Multi-modal, Multi-Aspect, Multi-Form-Factor AI
gpt-4 idefics-2-8b mistral-instruct apple-mlx gpt-5 reka-ai cohere google rewind apple mistral-ai microsoft paypal multimodality foundation-models embedding-models gpu-performance model-comparison enterprise-data open-source performance-optimization job-impact agi-criticism technical-report arthur-mensch dan-schulman chris-bishop
Between April 12-15, Reka launched Reka Core, a new GPT-4-class multimodal foundation model, with a detailed technical report described as "full Shazeer." Cohere Compass introduced a foundation embedding model for indexing and searching multi-aspect enterprise data such as emails and invoices. The open-source IDEFICS 2-8B model continues the reproduction of DeepMind's Flamingo multimodal model. Rewind pivoted to a multi-platform app called Limitless, moving away from spyware. Reddit discussions highlighted Apple MLX outperforming Ollama and Mistral Instruct on M2 Ultra GPUs, GPU choices for LLMs and Stable Diffusion, and AI-human comparisons by Microsoft Research's Chris Bishop. Former PayPal CEO Dan Schulman predicted GPT-5 will drastically reduce job scopes by 80%. Mistral CEO Arthur Mensch criticized the obsession with AGI as "creating God."
Mergestral, Meta MTIAv2, Cohere Rerank 3, Google Infini-Attention
mistral-8x22b command-r-plus rerank-3 infini-attention llama-3 sd-1.5 cosxl meta-ai-fair mistral-ai cohere google stability-ai hugging-face ollama model-merging training-accelerators retrieval-augmented-generation linear-attention long-context foundation-models image-generation rag-pipelines model-benchmarking context-length model-performance aidan_gomez ylecun swyx
Meta announced its new MTIAv2 chips, designed for training and inference acceleration, with improved architecture and integration with PyTorch 2.0. Mistral released the 8x22B Mixtral model, which was merged back into a dense model to effectively create a 22B Mistral model. Cohere launched Rerank 3, a foundation model that enhances enterprise search and retrieval-augmented generation (RAG) systems and supports 100+ languages. Google published a paper on Infini-attention, an ultra-scalable linear-attention mechanism demonstrated on 1B and 8B models at 1-million-token sequence length. Additionally, Meta's Llama 3 is expected to start rolling out soon. Other notable updates include Command R+, an open model surpassing GPT-4 in chatbot performance with a 128k context length, and advances in Stable Diffusion models and RAG pipelines.
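Infini-attention's long-context trick is a compressive memory updated segment by segment: keys and values are absorbed into a fixed-size matrix and later read out with a linear-attention-style query, so memory stays constant regardless of sequence length. A heavily simplified NumPy sketch of that update and retrieval, based on the paper's description; the dimensions are made up, and the delta-rule variant and the local attention it is combined with are omitted:

```python
import numpy as np

def phi(x):                      # ELU(x) + 1, a common feature map for linear attention
    return np.where(x > 0, x + 1.0, np.exp(x))

d = 64
M = np.zeros((d, d))             # compressive memory, fixed size regardless of context length
z = np.zeros(d)                  # normalization term

for _ in range(10):              # process 10 segments of 128 tokens each
    K = np.random.randn(128, d)
    V = np.random.randn(128, d)
    M += phi(K).T @ V            # absorb this segment's key/value pairs into the memory
    z += phi(K).sum(axis=0)

Q = np.random.randn(4, d)        # later queries read from the compressed memory
out = (phi(Q) @ M) / (phi(Q) @ z)[:, None]
print(out.shape)                 # (4, 64): memory cost stayed O(d*d), not O(sequence length)
```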
World_sim.exe
gpt-4 gpt-4o grok-1 llama-cpp claude-3-opus claude-3 gpt-5 nvidia nous-research stability-ai hugging-face langchain anthropic openai multimodality foundation-models hardware-optimization model-quantization float4 float6 retrieval-augmented-generation text-to-video prompt-engineering long-form-rag gpu-optimization philosophy-of-ai agi-predictions jensen-huang yann-lecun sam-altman
NVIDIA announced Project GR00T, a foundation model for humanoid robot learning from multimodal instructions, built on its tech stack including Isaac Lab, OSMO, and Jetson Thor. It also revealed the DGX Grace-Blackwell GB200 with over 1 exaflop of compute, capable of training a GPT-4-scale 1.8T-parameter model in 90 days on 2,000 Blackwells. Jensen Huang confirmed GPT-4 has 1.8 trillion parameters. The new GB200 GPU supports float4/float6 precision at roughly 3 bits per parameter and achieves 40,000 TFLOPS on fp4 with 2x sparsity.
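The "~3 bits per parameter" figure is easiest to appreciate as raw memory arithmetic: a 1.8T-parameter model shrinks from several terabytes of weights in fp16 to well under a terabyte at that density. A quick back-of-the-envelope calculation, taking the 1.8T figure quoted above at face value:

```python
N_PARAMS = 1.8e12                      # quoted GPT-4-scale parameter count

def weights_tb(bits_per_param):
    """Weight storage in terabytes for N_PARAMS at a given bit width."""
    return N_PARAMS * bits_per_param / 8 / 1e12

for label, bits in [("fp16", 16), ("fp8", 8), ("fp4", 4), ("~3-bit effective", 3)]:
    print(f"{label:>17}: {weights_tb(bits):.2f} TB")   # 3.60, 1.80, 0.90, 0.68 TB
```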
Open-source highlights include the release of Grok-1, a 314B-parameter model, and Stability AI's SV3D, an open model for generating novel-view videos of 3D objects from a single image. Nous Research collaborated on implementing steering vectors in llama.cpp.
In retrieval-augmented generation (RAG), a new 5.5-hour tutorial builds a pipeline using open-source Hugging Face models, and LangChain released a video on query routing and announced an integration with NVIDIA NIM for GPU-optimized LLM inference.
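At its simplest, such a pipeline is embed, retrieve, then generate. A minimal sketch using an open Hugging Face embedding model; the corpus, model choice, and prompt format are illustrative assumptions, not the tutorial's actual code:

```python
from sentence_transformers import SentenceTransformer
import numpy as np

# Hypothetical mini-corpus standing in for a real document store.
docs = [
    "NVIDIA NIM packages models as GPU-optimized inference microservices.",
    "Query routing sends a question to the retriever or tool best suited to answer it.",
    "LoRA adapters let you customize a base model cheaply.",
]

embedder = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")  # small open HF model
doc_emb = embedder.encode(docs, normalize_embeddings=True)

def retrieve(query, k=2):
    """Embed the query and return the k most similar documents by cosine similarity."""
    q = embedder.encode([query], normalize_embeddings=True)[0]
    top = np.argsort(doc_emb @ q)[::-1][:k]
    return [docs[i] for i in top]

question = "What does NVIDIA NIM provide?"
context = "\n".join(retrieve(question))
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
print(prompt)   # this prompt would then be passed to any open-weights LLM for generation
```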
Prominent opinions include Yann LeCun distinguishing language from other cognitive abilities, Sam Altman predicting AGI will arrive within 6 years, with the leap from GPT-4 to GPT-5 comparable to that from GPT-3 to GPT-4, and discussions on the philosophical status of LLMs like Claude. There is also advice against training models from scratch for most companies.