Topic: "cost-efficiency"
>$41B raised today (OpenAI @ 300b, Cursor @ 9.5b, Etched @ 1.5b)
deepseek-v3-0324 gemini-2.5-pro claude-3.7-sonnet openai deepseek gemini cursor etched skypilot agent-evals open-models model-releases model-performance coding multimodality model-deployment cost-efficiency agent-evaluation privacy kevinweil sama lmarena_ai scaling01 iscienceluvr stevenheidel lepikhin dzhng raizamrtn karpathy
OpenAI is preparing to release a highly capable open language model, its first since GPT-2, with a focus on reasoning and community feedback, as shared by @kevinweil and @sama. DeepSeek V3 0324 has achieved the #5 spot on the Arena leaderboard, becoming the top open model, with an MIT license and cost advantages. Gemini 2.5 Pro is noted for outperforming models like Claude 3.7 Sonnet in coding tasks, with pricing and further improvements expected soon. New startups like Sophont are building open multimodal foundation models for healthcare. Significant fundraises include Cursor closing $625M at a $9.6B valuation and Etched raising $85M at $1.5B. Innovations in AI infrastructure include SkyPilot's cost-efficient cloud provisioning and the launch of AgentEvals, an open-source package for evaluating AI agents. Discussions on smartphone privacy highlight the iPhone's stronger privacy defenses for users compared to Android.
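Cost-efficient provisioning of the kind SkyPilot automates largely comes down to comparing accelerator prices across clouds and picking the cheapest offer that satisfies a job's requirements. The sketch below is a hypothetical illustration of that idea: the price table, `GPUOffer`, and `cheapest_offer` are invented for this example and are not SkyPilot's API, and the dollar figures are only illustrative.

```python
from dataclasses import dataclass

@dataclass
class GPUOffer:
    cloud: str
    gpu: str
    hourly_usd: float  # on-demand price per GPU-hour (illustrative numbers only)

# Hypothetical price table; real provisioning tools query live cloud catalogs.
OFFERS = [
    GPUOffer("aws", "A100", 4.10),
    GPUOffer("gcp", "A100", 3.67),
    GPUOffer("azure", "A100", 3.40),
    GPUOffer("gcp", "L4", 0.71),
]

def cheapest_offer(gpu: str, hours: float) -> tuple[GPUOffer, float]:
    """Return the cheapest matching offer and the estimated total job cost."""
    matching = [o for o in OFFERS if o.gpu == gpu]
    best = min(matching, key=lambda o: o.hourly_usd)
    return best, best.hourly_usd * hours

best, cost = cheapest_offer("A100", hours=8)
print(f"Run on {best.cloud}: ~${cost:.2f} for 8 GPU-hours")
```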
Gemini 2.0 Flash GA, with new Flash Lite, 2.0 Pro, and Flash Thinking
gemini-2.0-flash gemini-2.0-flash-lite gemini-2.0-pro-experimental gemini-1.5-pro deepseek-r1 gpt-2 llama-3-1 google-deepmind hugging-face anthropic multimodality context-windows cost-efficiency pretraining fine-tuning reinforcement-learning transformer tokenization embeddings mixture-of-experts andrej-karpathy jayalammar maartengr andrewyng nearcyan
Google DeepMind officially launched the Gemini 2.0 models, including Flash, Flash-Lite, and Pro Experimental, with Gemini 2.0 Flash outperforming Gemini 1.5 Pro while being 12x cheaper and supporting multimodal input and a 1 million token context window. Andrej Karpathy released a 3h31m video deep dive into large language models, covering pretraining, fine-tuning, and reinforcement learning with examples like GPT-2 and Llama 3.1. A free course on the Transformer architecture was introduced by Jay Alammar, Maarten Grootendorst, and Andrew Ng, focusing on tokenizers, embeddings, and mixture-of-experts models. DeepSeek-R1 reached 1.2 million downloads on Hugging Face alongside a detailed 36-page technical report. Anthropic increased rewards to $10K and $20K for its jailbreak challenge, while the BlueRaven extension was updated to hide Twitter metrics for unbiased engagement.
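For readers new to the mixture-of-experts idea covered in that course, the gating step can be sketched in a few lines: a learned router scores the experts for each input, keeps the top-k, and mixes their outputs with the renormalized scores. The snippet below is only an illustrative toy with random weights, not a trained model or any specific library's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
d_model, n_experts, top_k = 16, 4, 2

# Toy experts: each is just a linear map here.
experts = [rng.normal(size=(d_model, d_model)) for _ in range(n_experts)]
gate_w = rng.normal(size=(d_model, n_experts))  # router weights

def moe_forward(x: np.ndarray) -> np.ndarray:
    scores = x @ gate_w                            # router logits, shape (n_experts,)
    probs = np.exp(scores - scores.max())
    probs /= probs.sum()                           # softmax over experts
    chosen = np.argsort(probs)[-top_k:]            # indices of the top-k experts
    weights = probs[chosen] / probs[chosen].sum()  # renormalize over the chosen experts
    # Only the chosen experts run; their outputs are mixed by the gate weights.
    return sum(w * (x @ experts[i]) for w, i in zip(weights, chosen))

out = moe_forward(rng.normal(size=d_model))
print(out.shape)  # (16,)
```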
o3-mini launches, OpenAI on "wrong side of history"
o3-mini o1 gpt-4o mistral-small-3-24b deepseek-r1 openai mistral-ai deepseek togethercompute fireworksai_hq ai-gradio replicate reasoning safety cost-efficiency model-performance benchmarking api open-weight-models model-releases sam-altman
OpenAI released o3-mini, a new reasoning model available to free and paid users with a "high" reasoning-effort option that outperforms the earlier o1 model on STEM tasks and safety benchmarks while costing 93% less per token. Sam Altman acknowledged that OpenAI has been on the "wrong side of history" regarding open source and credited DeepSeek R1 with shifting his assumptions. Mistral AI launched Mistral Small 3 (24B), an open-weight model with competitive performance and low API costs. DeepSeek R1 is supported by Text Generation Inference v3.1.0 and available via ai-gradio and Replicate. The news highlights advancements in reasoning, cost-efficiency, and safety in AI models.
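The "93% less per token" figure is easy to sanity-check against list prices. Taking the launch-time API prices as roughly $15/$60 per million input/output tokens for o1 and $1.10/$4.40 for o3-mini (figures from memory, so treat them as assumptions rather than quoted from this issue), the reduction works out as follows:

```python
# Assumed launch-time list prices, USD per 1M tokens: (input, output).
o1 = (15.00, 60.00)
o3_mini = (1.10, 4.40)

for name, (new, old) in [("input", (o3_mini[0], o1[0])), ("output", (o3_mini[1], o1[1]))]:
    reduction = 1 - new / old
    print(f"{name}: {reduction:.1%} cheaper")  # both ~92.7%, i.e. ~93% less per token
```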
o3 solves AIME, GPQA, Codeforces, makes 11 years of progress in ARC-AGI and 25% in FrontierMath
o3 o3-mini o1-mini gpt-3 gpt-4o o1 openai benchmarking math reasoning model-performance inference-speed cost-efficiency alignment safety-testing sama eric-wallace
OpenAI announced the o3 and o3-mini models with groundbreaking benchmark results, including a jump from 2% to 25% on the FrontierMath benchmark and 87.5% on the ARC-AGI reasoning benchmark, representing about 11 years of progress on the GPT-3 to GPT-4o scaling curve. The o1-mini model shows superior inference efficiency compared to o3-full, promising significant cost reductions on coding tasks. The announcement was accompanied by community discussions, safety-testing applications, and detailed analyses. Sam Altman highlighted the unusual cost-performance tradeoff, and Eric Wallace shared insights on the o-series deliberative alignment strategy.
DataComp-LM: the best open-data 7B model/benchmark/dataset
mistral-nemo-12b gpt-4o-mini deepseek-v2-0628 mistral-7b llama-3 gemma-2 qwen-2 datacomp hugging-face openai nvidia mistral-ai deepseek dataset-design scaling-laws model-benchmarking model-performance fine-tuning multilinguality function-calling context-windows open-source-models model-optimization cost-efficiency benchmarking sam-altman guillaume-lample philschmid miramurati
The DataComp team released a competitive 7B open-data language model trained on only 2.5T tokens selected from the massive 240-trillion-token DCLM-POOL dataset, showing superior scaling trends compared to FineWeb. OpenAI launched GPT-4o mini, a cost-effective model scoring 82% on MMLU with performance near GPT-4-Turbo, aimed at developers for broad applications. NVIDIA and Mistral jointly released the Mistral NeMo 12B model featuring a 128k token context window, an FP8 checkpoint, multilingual support, and Apache 2.0 licensing. DeepSeek announced DeepSeek-V2-0628 as the top open-source model on the LMSYS Chatbot Arena leaderboard, with strong rankings in coding, math, and hard prompts. This news highlights advances in dataset design, model efficiency, and open-source contributions in the AI community.
Mini, Nemo, Turbo, Lite - Smol models go brrr (GPT4o-mini version)
gpt-4o-mini deepseek-v2-0628 mistral-nemo llama-8b openai deepseek-ai mistral-ai nvidia meta-ai-fair hugging-face langchain keras cost-efficiency context-windows open-source benchmarking neural-networks model-optimization text-generation fine-tuning developer-tools gpu-support parallelization cuda-integration multilinguality long-context article-generation liang-wenfeng
OpenAI launched the GPT-4o Mini, a cost-efficient small model priced at $0.15 per million input tokens and $0.60 per million output tokens, aiming to replace GPT-3.5 Turbo with enhanced intelligence but some performance limitations. DeepSeek open-sourced DeepSeek-V2-0628, topping the LMSYS Chatbot Arena Leaderboard and emphasizing their commitment to contributing to the AI ecosystem. Mistral AI and NVIDIA released the Mistral NeMo, a 12B parameter multilingual model with a record 128k token context window under an Apache 2.0 license, sparking debates on benchmarking accuracy against models like Meta Llama 8B. Research breakthroughs include the TextGrad framework for optimizing compound AI systems via textual feedback differentiation and the STORM system improving article writing by 25% through simulating diverse perspectives and addressing source bias. Developer tooling trends highlight LangChain's evolving context-aware reasoning applications and the Modular ecosystem's new official GPU support, including discussions on Mojo and Keras 3.0 integration.
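At the quoted $0.15 per million input tokens and $0.60 per million output tokens, per-request costs are easy to estimate. A minimal helper using just those two figures from the summary above:

```python
INPUT_USD_PER_M = 0.15   # GPT-4o Mini input price quoted above
OUTPUT_USD_PER_M = 0.60  # GPT-4o Mini output price quoted above

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimated USD cost of one call at the quoted per-million-token prices."""
    return (input_tokens * INPUT_USD_PER_M + output_tokens * OUTPUT_USD_PER_M) / 1_000_000

# e.g. a 3,000-token prompt with a 500-token answer:
print(f"${request_cost(3_000, 500):.6f}")  # ~$0.00075
```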
Mini, Nemo, Turbo, Lite - Smol models go brrr (GPT4o version)
gpt-4o-mini mistral-nemo llama-3 llama-3-400b deepseek-v2 openai nvidia mistral-ai togethercompute deepseek-ai lmsys model-quantization context-windows instruction-following model-performance cost-efficiency multimodality benchmarking open-source model-release sam-altman
GPT-4o mini launches with a 99% price reduction compared to text-davinci-003, at 3.5% of the price of GPT-4o, while matching Opus-level benchmarks. It supports 16k output tokens, is faster than previous models, and will soon support text, image, video, and audio inputs and outputs. Mistral NeMo, a 12B parameter model developed with NVIDIA, features a 128k token context window, an FP8 checkpoint, and strong benchmark performance. Together Lite and Turbo offer fp8/int4 quantizations of Llama 3 with up to 4x throughput and significantly reduced costs. DeepSeek V2 is now open-sourced. Upcoming releases include at least 5 unreleased models, and Llama 3 400B leaks ahead of ICML 2024.
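The appeal of the fp8/int4 Lite and Turbo variants mentioned above is largely a weight-memory and bandwidth story: halving or quartering the bits per weight shrinks the model roughly proportionally. A back-of-the-envelope estimate for the 8B and 70B members of the Llama 3 family (weights only, ignoring KV cache, activations, and quantization scales):

```python
def weight_gb(n_params_billion: float, bits_per_weight: float) -> float:
    """Approximate weight memory in GB (weights only, no overheads)."""
    return n_params_billion * 1e9 * bits_per_weight / 8 / 1e9

for bits in (16, 8, 4):
    print(f"Llama-3-8B  @ {bits:>2}-bit: ~{weight_gb(8, bits):5.1f} GB")
for bits in (16, 8, 4):
    print(f"Llama-3-70B @ {bits:>2}-bit: ~{weight_gb(70, bits):5.1f} GB")
```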
RouteLLM: RIP Martian? (Plus: AINews Structured Summaries update)
gpt-4 gemma-2-27b gemma-2-9b lmsys openai llm-routing cost-efficiency model-performance model-optimization data-augmentation syntax-based-routing mixture-of-experts inference-throughput software-2.0 computer-vision karpathy bindureddy armand-joulin
LMSYS introduces RouteLLM, an open-source router framework trained on preference data from Chatbot Arena, achieving cost reductions of over 85% on MT Bench, 45% on MMLU, and 35% on GSM8K while maintaining 95% of GPT-4's performance. This approach surpasses previous task-specific routing by using syntax-based Mixture of Experts (MoE) routing and data augmentation, beating commercial solutions by 40%. The update highlights advances in LLM routing, cost-efficiency, and model-performance optimization across multiple models rather than single-model or MoE-level improvements. Additionally, the AI Twitter recap notes the Gemma 2 model family as a top open model, the Block Transformer architecture for improved inference throughput, and a proposal by Andrej Karpathy for a fully Software 2.0 computer vision system.
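The core routing idea is simple even if RouteLLM's trained routers are not: score each query, send easy queries to a cheap model and hard ones to the strong model, and tune the threshold to trade cost against quality. The sketch below is a hypothetical illustration with stand-in names (`difficulty`, `call_cheap_model`, `call_strong_model`), not RouteLLM's actual interface.

```python
from typing import Callable

def make_router(
    difficulty: Callable[[str], float],   # a trained scorer in RouteLLM; a stand-in here
    call_cheap_model: Callable[[str], str],
    call_strong_model: Callable[[str], str],
    threshold: float = 0.5,
) -> Callable[[str], str]:
    """Route a prompt to the cheap or strong model based on a difficulty score in [0, 1]."""
    def route(prompt: str) -> str:
        if difficulty(prompt) < threshold:
            return call_cheap_model(prompt)   # most traffic lands here -> cost savings
        return call_strong_model(prompt)      # only hard prompts pay the GPT-4-class price
    return route

# Toy usage: a keyword heuristic stands in for the trained difficulty scorer.
router = make_router(
    difficulty=lambda p: 0.9 if "prove" in p.lower() else 0.1,
    call_cheap_model=lambda p: f"[cheap model answers] {p}",
    call_strong_model=lambda p: f"[strong model answers] {p}",
)
print(router("What's the capital of France?"))
print(router("Prove that sqrt(2) is irrational."))
```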
Is this... OpenQ*?
deepseek-coder-v2 llama-3-8b nemotron-4-340b stable-diffusion-3-medium deepseek_ai anthropic runwayml openai apple nvidia stability-ai luma-labs reward-tampering test-time-search mathematical-reasoning process-supervision fine-tuning on-device-ai video-generation cost-efficiency context-length coding image-understanding multimodality adcock_brett clementdelangue svpino
DeepSeek-Coder-V2 promises GPT4T-beating performance at a fraction of the cost, excelling in coding and math with support for 338 programming languages and a 128K context length. Anthropic released new research on reward tampering. Runway launched Gen-3 Alpha, its video-generation answer to Sora. A series of papers explore "test-time" search techniques that improve mathematical reasoning with models like Llama 3 8B. Apple announced Apple Intelligence with a smarter Siri and image/document understanding, partnered with OpenAI to integrate ChatGPT into iOS 18, and released 20 new CoreML models with LoRA fine-tuning for specialization. NVIDIA released Nemotron-4 340B, an open model matching GPT-4 performance. Stability AI released Stable Diffusion 3 Medium weights. Luma Labs launched Dream Machine for 5-second video generation from text and images.
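The "test-time" search papers mentioned here generally share a recipe: sample many candidate solutions from the base model, score each with a verifier or reward model, and keep the best, spending extra compute at inference time rather than in training. A minimal best-of-N sketch with stand-in `generate` and `score` functions (a hypothetical illustration, not any specific paper's code):

```python
import random
from typing import Callable

def best_of_n(
    problem: str,
    generate: Callable[[str], str],      # samples one candidate solution from the model
    score: Callable[[str, str], float],  # verifier / reward model: higher is better
    n: int = 16,
) -> str:
    """Sample n candidates and keep the one the verifier likes most."""
    candidates = [generate(problem) for _ in range(n)]
    return max(candidates, key=lambda c: score(problem, c))

# Toy demo: the "model" guesses numbers, the "verifier" prefers guesses close to 42.
random.seed(0)
answer = best_of_n(
    "guess the number",
    generate=lambda _: str(random.randint(0, 100)),
    score=lambda _, c: -abs(int(c) - 42),
)
print(answer)
```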
Claude 3 is officially America's Next Top Model
claude-3-opus claude-3-sonnet claude-3-haiku gpt-4o-mini mistral-7b qwen-72b anthropic mistral-ai huggingface openrouter stable-diffusion automatic1111 comfyui fine-tuning model-merging alignment ai-ethics benchmarking model-performance long-context cost-efficiency model-evaluation mark_riedl ethanjperez stuhlmueller ylecun aravsrinivas
Claude 3 Opus outperforms GPT4T and Mistral Large in blind Elo rankings, with Claude 3 Haiku marking a new cost-performance frontier. Fine-tuning techniques like QLoRA on Mistral 7B and evolutionary model merging on Hugging Face models are highlighted. Public opinion shows strong opposition to ASI development. Research supervision opportunities in AI alignment are announced. The Stable Diffusion 3 (SD3) release raises workflow concerns for tools like ComfyUI and Automatic1111. Opus shows a 5% performance dip on OpenRouter compared to the Anthropic API. A new benchmark stresses LLM recall at long context lengths, with Mistral 7B struggling and Qwen 72B performing well.
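The blind Elo rankings come from Chatbot Arena-style pairwise votes, where each head-to-head result nudges the two models' ratings via the standard Elo update. For reference, the update rule looks like this (generic Elo, not LMSYS's exact scoring pipeline, which also uses Bradley-Terry fits):

```python
def expected_score(r_a: float, r_b: float) -> float:
    """Probability that A beats B under the Elo model."""
    return 1 / (1 + 10 ** ((r_b - r_a) / 400))

def elo_update(r_a: float, r_b: float, a_won: bool, k: float = 32) -> tuple[float, float]:
    """Return updated ratings after one A-vs-B battle."""
    e_a = expected_score(r_a, r_b)
    s_a = 1.0 if a_won else 0.0
    return r_a + k * (s_a - e_a), r_b + k * ((1 - s_a) - (1 - e_a))

# e.g. a 1250-rated model beating a 1200-rated one gains about 14 points:
print(elo_update(1250, 1200, a_won=True))  # (~1263.7, ~1186.3)
```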
Claude 3 just destroyed GPT 4 (see for yourself)
claude-3 claude-3-opus claude-3-sonnet claude-3-haiku gpt-4 anthropic amazon google claude-ai multimodality vision long-context model-alignment model-evaluation synthetic-data structured-output instruction-following model-speed cost-efficiency benchmarking safety mmitchell connor-leahy
Claude 3 from Anthropic launches in three sizes: Haiku (small, unreleased), Sonnet (medium, default on claude.ai, AWS, and GCP), and Opus (large, on Claude Pro). Opus outperforms GPT-4 on key benchmarks like GPQA, impressing benchmark authors. All models support multimodality with advanced vision capabilities, including converting a 2-hour video into a blog post. Claude 3 offers improved alignment, fewer refusals, and extended context length up to 1 million tokens with near-perfect recall. Haiku is noted for speed and cost-efficiency, processing dense research papers in under three seconds. The models excel at following complex instructions and producing structured outputs like JSON. Safety improvements reduce refusal rates, though some criticism remains from experts. Claude 3 is trained on synthetic data and shows strong domain-specific evaluation results in finance, medicine, and philosophy.
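Producing "structured outputs like JSON" in practice is usually careful prompting plus validation on the caller's side. A minimal sketch, assuming the Anthropic Python SDK's `messages.create` interface and the `claude-3-opus-20240229` model id (check the current docs before relying on either):

```python
import json
import anthropic  # pip install anthropic; reads ANTHROPIC_API_KEY from the environment

client = anthropic.Anthropic()

prompt = (
    "Extract the company and amount from this sentence and reply with JSON only, "
    'using the keys "company" and "amount_usd": '
    '"Etched raised $85M at a $1.5B valuation."'
)

message = client.messages.create(
    model="claude-3-opus-20240229",
    max_tokens=200,
    messages=[{"role": "user", "content": prompt}],
)

# Validate that the reply is actually parseable JSON before using it downstream.
data = json.loads(message.content[0].text)
print(data["company"], data["amount_usd"])
```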
Mistral Large disappoints
mistral-large mistral-small mixtral-8x7b gpt-4-turbo dreamgen-opus-v1 mistral-ai openai hugging-face benchmarking model-merging fine-tuning reinforcement-learning model-training tokenization model-optimization ai-assisted-decompilation performance cost-efficiency deception roleplay deep-speed dpo timotheeee1 cogbuji plasmator jsarnecki maldevide spottyluck mrjackspade
Mistral announced Mistral Large, a new language model achieving 81.2% accuracy on MMLU, trailing GPT-4 Turbo by about 5 percentage points on benchmarks. The community reception has been mixed, with skepticism about open sourcing and claims that Mistral Small outperforms the open Mixtral 8x7B. Discussions in TheBloke's Discord highlighted performance and cost-efficiency comparisons between Mistral Large and GPT-4 Turbo, technical challenges with DeepSpeed and DPOTrainer for training, advances in AI deception for roleplay characters using DreamGen Opus V1, and the complexities of model merging via linear interpolation and PEFT methods. Enthusiasm for AI-assisted decompilation was also expressed, emphasizing the use of open-source projects for training data.
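The "linear interpolation" part of those model-merging discussions is literally a weighted average of two checkpoints, parameter by parameter, between models that share an architecture. A generic sketch of that step (not the specific merge recipes discussed in the Discord):

```python
import torch

def lerp_merge(state_a: dict, state_b: dict, alpha: float = 0.5) -> dict:
    """Linearly interpolate two compatible state dicts: (1 - alpha) * A + alpha * B."""
    assert state_a.keys() == state_b.keys(), "models must share an architecture"
    return {
        name: (1 - alpha) * state_a[name] + alpha * state_b[name]
        for name in state_a
    }

# Toy demo with two tiny "models" of the same shape:
a = {"w": torch.ones(2, 2), "b": torch.zeros(2)}
b = {"w": torch.full((2, 2), 3.0), "b": torch.ones(2)}
merged = lerp_merge(a, b, alpha=0.25)
print(merged["w"])  # every entry is 0.75 * 1 + 0.25 * 3 = 1.5
```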