Model: "deepseek-v3"
not much happened today
kernelllm-8b gpt-4o deepseek-v3 mistral-medium-3 qwen3 blip3-o xgen-small anisora stable-audio-open-small alphaevolve meta-ai-fair mistral-ai qwen deepseek salesforce bilibili stability-ai google benchmarking model-performance multilinguality hardware-optimization multimodality image-generation video-generation text-to-audio model-parallelism chain-of-thought instruction-following reasoning mitigation-strategies reach_vb lmarena_ai theadimeline adcock_brett jxmnop dair_ai omarsar0
Meta released KernelLLM 8B, outperforming GPT-4o and DeepSeek V3 on KernelBench-Triton Level 1. Mistral Medium 3 debuted strongly across multiple benchmarks. Qwen3 models introduced a unified framework with multilingual support. DeepSeek-V3 features hardware-aware co-design. The BLIP3-o family was released for multimodal tasks using diffusion transformers. Salesforce launched xGen-Small models excelling in long-context and math benchmarks. Bilibili released AniSora for anime video generation. Stability AI open-sourced Stable Audio Open Small, optimized for Arm devices. Google’s AlphaEvolve coding agent improved on Strassen's algorithm for the first time since 1969. Research shows chain-of-thought (CoT) reasoning can harm a model's ability to follow instructions; mitigation strategies such as few-shot in-context learning, self-reflection, self-selective reasoning, and classifier-selective reasoning can counteract these reasoning-induced failures, with classifier-selective reasoning the most effective, though reasoning techniques show high variance and limited generalization.
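Classifier-selective reasoning is only named above; as a rough sketch under stated assumptions (the model call and the classifier below are stand-ins, not any published implementation), the idea is to gate chain-of-thought behind a lightweight classifier that predicts whether reasoning will help or hurt instruction-following for a given request.

```python
# Hypothetical sketch of classifier-selective reasoning: a lightweight classifier
# decides per-request whether to prepend a chain-of-thought prompt or answer directly.
from dataclasses import dataclass
from typing import Callable

@dataclass
class SelectiveReasoner:
    generate: Callable[[str], str]            # base LLM call (stub / assumption)
    helps_classifier: Callable[[str], float]  # returns P(CoT improves instruction-following)
    threshold: float = 0.5

    def answer(self, instruction: str) -> str:
        if self.helps_classifier(instruction) >= self.threshold:
            prompt = instruction + "\n\nThink step by step, then give the final answer."
        else:
            # Skip CoT to avoid reasoning-induced instruction-following failures.
            prompt = instruction + "\n\nAnswer directly and follow the instructions exactly."
        return self.generate(prompt)

# Toy usage with stub functions standing in for a real model and classifier.
reasoner = SelectiveReasoner(
    generate=lambda p: f"<model output for: {p[:40]}...>",
    helps_classifier=lambda instr: 0.9 if "prove" in instr.lower() else 0.2,
)
print(reasoner.answer("Prove that the sum of two even numbers is even."))
print(reasoner.answer("Reply with exactly one word: OK"))
```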
ChatGPT Codex, OpenAI's first cloud SWE agent
codex-1 openai-o3 codex-mini gemma-3 blip3-o qwen-2.5 marigold-iid deepseek-v3 lightlab gemini-2.0 lumina-next openai runway salesforce qwen deepseek google google-deepmind j1 software-engineering parallel-processing multimodality diffusion-models depth-estimation scaling-laws reinforcement-learning fine-tuning model-performance multi-turn-conversation reasoning audio-processing sama kevinweil omarsar0 iscienceluvr akhaliq osanseviero c_valenzuelab mervenoyann arankomatsuzaki jasonwei demishassabis philschmid swyx teortaxestex jaseweston
OpenAI launched Codex, a cloud-based software engineering agent powered by codex-1 (an optimized version of OpenAI o3) available in research preview for Pro, Enterprise, and Team ChatGPT users, featuring parallel execution of tasks such as refactoring and bug fixing. The Codex CLI was enhanced with quick sign-in and a new low-latency model, codex-mini. Gemma 3 is highlighted as the best open model runnable on a single GPU. Runway released the Gen-4 References API for style transfer in generation. Salesforce introduced BLIP3-o, a unified multimodal model family using diffusion transformers for CLIP image features. The Qwen 2.5 models (1.5B and 3B versions) were integrated into the PocketPal app with various chat templates. Marigold IID, a new state-of-the-art open-source depth estimation model, was released.
In research, DeepSeek shared insights on scaling and hardware co-design for DeepSeek-V3. Google unveiled LightLab, diffusion-based control of light sources in images. Google DeepMind's AlphaEvolve uses Gemini 2.0 to discover new math and reduce costs without reinforcement learning. Omni-R1 studied audio's role in fine-tuning audio LLMs. Qwen proposed a parallel scaling law inspired by classifier-free guidance. Salesforce released Lumina-Next on the Qwen base, outperforming Janus-Pro. A study found LLM performance degrades in multi-turn conversations due to unreliability. J1 incentivizes LLM-as-a-Judge thinking via reinforcement learning. A new Qwen study correlates question and strategy similarity to predict reasoning strategies.
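The parallel scaling idea is only mentioned in passing; as a hedged illustration (module names, shapes, and the aggregation scheme below are assumptions, not the paper's code), the core pattern is to run the same backbone over several learned input transformations in parallel and combine the streams with learned weights, loosely echoing classifier-free-guidance-style combination.

```python
# Hedged sketch of CFG-inspired parallel scaling: P learned input transforms feed the
# same backbone, and a learned softmax weighting aggregates the P output streams.
import torch
import torch.nn as nn

class ParallelScaled(nn.Module):
    def __init__(self, backbone: nn.Module, d_model: int, num_streams: int = 4):
        super().__init__()
        self.backbone = backbone
        self.transforms = nn.ModuleList([nn.Linear(d_model, d_model) for _ in range(num_streams)])
        self.mix_logits = nn.Parameter(torch.zeros(num_streams))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        streams = [self.backbone(t(x)) for t in self.transforms]  # P parallel forward passes
        weights = torch.softmax(self.mix_logits, dim=0)           # learned aggregation weights
        return sum(w * s for w, s in zip(weights, streams))

backbone = nn.Sequential(nn.Linear(16, 16), nn.GELU(), nn.Linear(16, 16))
model = ParallelScaled(backbone, d_model=16, num_streams=4)
print(model(torch.randn(2, 16)).shape)  # torch.Size([2, 16])
```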
not much happened today
gemini-2.5-pro chatgpt deepseek-v3 qwen-2.5 claude-3.5-sonnet claude-3.7-sonnet google anthropic openai llama_index langchain runway deepseek math benchmarking chains-of-thought model-performance multi-agent-systems agent-frameworks media-generation long-horizon-planning code-generation rasbt danielhanchen hkproj
Gemini 2.5 Pro shows strengths and weaknesses, notably lacking LaTeX math rendering unlike ChatGPT, and scored 24.4% on the 2025 USAMO. DeepSeek V3 ranks 8th and 12th on recent leaderboards. Qwen 2.5 models have been integrated into the PocketPal app. Research from Anthropic reveals that chain-of-thought (CoT) reasoning is often unfaithful, especially on harder tasks, raising safety concerns. OpenAI's PaperBench benchmark shows AI agents struggle with long-horizon planning, with Claude 3.5 Sonnet achieving only 21.0% accuracy. The CodeAct framework generalizes ReAct by having agents write and execute code dynamically as their actions. LangChain explains multi-agent handoffs in LangGraph. Runway Gen-4 marks a new phase in media creation.
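Since CodeAct is described in only a sentence, here is a minimal, hypothetical sketch of the pattern it generalizes: the agent's actions are Python snippets rather than fixed tool calls, executed in a scratch namespace, with captured output fed back as the observation (the llm() stub stands in for a real model call).

```python
# Minimal CodeAct-style loop (illustrative only): the agent's "actions" are Python
# snippets, executed in a throwaway namespace, and the captured output is fed back
# as the observation for the next turn.
import io, contextlib

def llm(history: str) -> str:
    # Stand-in for a real model call; here it always emits one fixed action.
    return "print(sum(range(1, 11)))"

def run_code(code: str) -> str:
    buf = io.StringIO()
    try:
        with contextlib.redirect_stdout(buf):
            exec(code, {})
    except Exception as e:  # errors become observations, not crashes
        return f"Error: {e}"
    return buf.getvalue().strip()

def codeact_episode(task: str, max_turns: int = 3) -> str:
    history = f"Task: {task}\n"
    for _ in range(max_turns):
        action = llm(history)                 # the model writes code instead of picking a tool
        observation = run_code(action)
        history += f"Action:\n{action}\nObservation: {observation}\n"
        if observation and not observation.startswith("Error"):
            return observation
    return "no answer"

print(codeact_episode("Compute the sum of integers 1..10"))  # -> 55
```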
not much happened today
gpt-4o deepseek-v3 claude-3.7-sonnet o3-mini gemini-2.5-pro openai deepseek anthropic google-deepmind togethercompute hypertecgroup coreweave cursor-ai windsurf-ai coding instruction-following image-generation policy-compliance long-context audio-processing video-processing gpu-clusters ai-infrastructure api-access sama kevinweil joannejang nrehiew_ giffmana _philschmid scaling01 saranormous
GPT-4o was praised for its improved coding, instruction following, and freedom, becoming the leading non-reasoning coding model, surpassing DeepSeek V3 and Claude 3.7 Sonnet on coding benchmarks, though it still lags behind reasoning models like o3-mini. Concerns about policy compliance in image generation were noted, with efforts to improve adherence. Gemini 2.5 Pro was highlighted for its advanced audio and video understanding, long-context capabilities, and integration with platforms like Cursor AI and Windsurf AI. AI infrastructure developments include a partnership between Together AI and Hypertec Group to deliver large-scale GPU clusters, and CoreWeave's IPO was celebrated for advancing AI infrastructure. GPU and TPU usage is expected to increase significantly. "GPT-4o's transparency and background generation feature" and "Gemini 2.5 Pro scored above 50% on Simple-Bench AI Explanation" were key highlights.
not much happened today
gemini-2.0-flash-thinking command-a qwq-32b gemma-3-27b gemma-3 shieldgemma-2 llama-3-70b deepseek-r1 o1-mini deepseek-v3 google-deepmind cohere meta-ai-fair alibaba hugging-face model-updates model-performance benchmarking reinforcement-learning transformers normalization-layers image-generation vision memory-efficiency context-windows fine-tuning yann-lecun
Google DeepMind announced updates to Gemini 2.0, including an upgraded Flash Thinking model with stronger reasoning and native image generation capabilities. Cohere launched Command A, a 111B parameter dense model with a 256K context window and competitive pricing, available on Hugging Face. Meta AI proposed Dynamic Tanh (DyT) as a replacement for normalization layers in Transformers, supported by Yann LeCun. Alibaba released QwQ-32B, a 32.5B parameter model excelling in math and coding, fine-tuned with reinforcement learning and freely available under the Apache 2.0 license. Google DeepMind also released Gemma 3 models ranging from 1B to 27B parameters with a 128K token context window and support for over 140 languages, plus ShieldGemma 2, an image safety checker. Benchmarking shows Gemma 3 27B has strong vision and memory efficiency but is outperformed by larger models like Llama 3.3 70B and DeepSeek V3 671B. The Hugging Face LLM leaderboard history was shared by @_lewtun.
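Dynamic Tanh is simple enough to show concretely. The sketch below follows the described formulation, an elementwise tanh with a learnable scale plus the usual affine parameters, used as a drop-in replacement for a normalization layer; the initialization value is the commonly reported default, treated here as an assumption.

```python
# Sketch of Dynamic Tanh (DyT): elementwise tanh(alpha * x) with learnable affine
# parameters, used in place of a normalization layer such as LayerNorm.
import torch
import torch.nn as nn

class DyT(nn.Module):
    def __init__(self, dim: int, init_alpha: float = 0.5):
        super().__init__()
        self.alpha = nn.Parameter(torch.full((1,), init_alpha))  # learnable scalar scale
        self.weight = nn.Parameter(torch.ones(dim))
        self.bias = nn.Parameter(torch.zeros(dim))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.weight * torch.tanh(self.alpha * x) + self.bias

# Drop-in usage where a Transformer block would otherwise use nn.LayerNorm(dim).
x = torch.randn(2, 8, 64)
print(DyT(64)(x).shape)  # torch.Size([2, 8, 64])
```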
Gemma 3 beats DeepSeek V3 in Elo, 2.0 Flash beats GPT4o with Native Image Gen
gemma-3 gemini-1.5-pro gemini-2 o1-preview o3-mini-high deepseek-v3 claude-3.7-sonnet qwen-2.5-max google-deepmind openai multimodality multilinguality context-window quantization image-generation model-benchmarking model-performance vision reach_vb _philschmid danielhanchen lmarena_ai osanseviero
Google DeepMind launched the Gemma 3 family of models featuring a 128k context window, multimodal input (image and video), and multilingual support for 140+ languages. The Gemma 3-27B model ranks among the top open models on LMArena, outperforming several competitors and matching Gemini-1.5-Pro. Additionally, Gemini 2.0 Flash introduced native image generation with advanced image editing capabilities, a feature OpenAI had teased but not launched. The updates highlight significant advances in context length, multimodality, and model efficiency via quantization.
not much happened today
gpt-4.5 gpt-4 gpt-4o o1 claude-3.5-sonnet claude-3.7 claude-3-opus deepseek-v3 grok-3 openai anthropic perplexity-ai deepseek scaling01 model-performance humor emotional-intelligence model-comparison pricing context-windows model-size user-experience andrej-karpathy jeremyphoward abacaj stevenheidel yuchenj_uw aravsrinivas dylan522p random_walker
GPT-4.5 sparked mixed reactions on Twitter, with @karpathy noting users preferred GPT-4 in a poll despite his personal favor for GPT-4.5's creativity and humor. Critics like @abacaj highlighted GPT-4.5's slowness and questioned its practical value and pricing compared to other models. Performance-wise, GPT-4.5 ranks above GPT-4o but below o1 and Claude 3.5 Sonnet, with Claude 3.7 outperforming it on many tasks, yet GPT-4.5 is praised for its humor and "vibes." Speculation about GPT-4.5's size suggests around 5 trillion parameters. Discussions also touched on pricing disparities, with Perplexity Deep Research at $20/month versus ChatGPT Pro at $200/month. The emotional intelligence and humor of models like Claude 3.7 were also noted.
lots of small launches
gpt-4o claude-3.7-sonnet claude-3.7 claude-3.5-sonnet deepseek-r1 deepseek-v3 grok-3 openai anthropic amazon cloudflare perplexity-ai deepseek-ai togethercompute elevenlabs elicitorg inceptionailabs mistral-ai voice model-releases cuda gpu-optimization inference open-source api model-performance token-efficiency context-windows jit-compilation lmarena_ai alexalbert__ aravsrinivas reach_vb
GPT-4o Advanced Voice Preview is now available for free ChatGPT users with enhanced daily limits for Plus and Pro users. Claude 3.7 Sonnet has achieved the top rank in WebDev Arena with improved token efficiency. DeepSeek-R1 with 671B parameters benefits from the Together Inference platform optimizing NVIDIA Blackwell GPU usage, alongside the open-source DeepGEMM CUDA library delivering up to 2.7x speedups on Hopper GPUs. Perplexity launched a new Voice Mode and a Deep Research API. The upcoming Grok 3 API will support a 1M token context window. Several companies including Elicit, Amazon, Anthropic, Cloudflare, FLORA, Elevenlabs, and Inception Labs announced new funding rounds, product launches, and model releases.
not much happened today
claude-3.7-sonnet claude-3.7 deepseek-r1 o3-mini deepseek-v3 gemini-2.0-pro gpt-4o qwen2.5-coder-32b-instruct anthropic perplexity-ai amazon google-cloud deepseek_ai coding reasoning model-benchmarking agentic-workflows context-window model-performance open-source moe model-training communication-libraries fp8 nvlink rdma cli-tools skirano omarsar0 reach_vb artificialanlys terryyuezhuo _akhaliq _philschmid catherineols goodside danielhanchen
Claude 3.7 Sonnet demonstrates exceptional coding and reasoning capabilities, outperforming models like DeepSeek R1, o3-mini, and GPT-4o on benchmarks such as SciCode and LiveCodeBench. It is available on platforms including Perplexity Pro, Anthropic, Amazon Bedrock, and Google Cloud, priced at $3/$15 per million input/output tokens. Key features include a 64k-token thinking mode, a 200k context window, and the CLI-based coding assistant Claude Code. Meanwhile, DeepSeek released DeepEP, an open-source communication library optimized for MoE model training and inference, with support for NVLink, RDMA, and FP8. These updates highlight advancements in coding AI and efficient model training infrastructure.
Mistral Small 3 24B and Tulu 3 405B
mistral-small-3 tulu-3-405b llama-3 tiny-swallow-1.5b qwen-2.5-max deepseek-v3 claude-3.5-sonnet gemini-1.5-pro gpt4o-mini llama-3-3-70b mistral-ai ai2 sakana-ai alibaba_qwen deepseek ollama llamaindex reinforcement-learning model-fine-tuning local-inference model-performance model-optimization on-device-ai instruction-following api training-data natural-language-processing clementdelangue dchaplot reach_vb
Mistral AI released Mistral Small 3, a 24B parameter model optimized for low-latency local inference, scoring 81% on MMLU and competing with Llama 3.3 70B, Qwen 2.5 32B, and GPT-4o-mini. AI2 released Tülu 3 405B, a 405B finetune of Llama 3 trained with Reinforcement Learning from Verifiable Rewards (RLVR), competitive with DeepSeek V3. Sakana AI launched TinySwallow-1.5B, a Japanese language model using TAID for on-device use. Alibaba Qwen released Qwen 2.5 Max, trained on 20 trillion tokens, with performance comparable to DeepSeek V3, Claude 3.5 Sonnet, and Gemini 1.5 Pro, along with updated API pricing. These releases highlight advances in open models, efficient inference, and reinforcement learning techniques.
not much happened today
deepseek-r1 deepseek-v3 coder-v2 prover deepseek hugging-face dell openai instruction-tuning performance-benchmarks model-deployment training-costs hardware-scalability ai-safety risk-mitigation ethical-ai open-source gpu-utilization yann-lecun yoshua-bengio francois-chollet giffman
DeepSeek-R1 and DeepSeek-V3 have made significant advancements, trained with an instruction-tuning dataset of 1.5M samples, including 600,000 reasoning and 200,000 non-reasoning SFT examples. The models show strong benchmark performance and are deployed on-premise via collaborations with Dell and Hugging Face. Training costs are estimated at around $5.5M to $6M, with efficient hardware utilization on 8xH100 servers. The International AI Safety Report highlights risks such as malicious use, malfunctions, and systemic risks including AI-driven cyberattacks. Industry leaders like Yann LeCun and Yoshua Bengio weigh in on market reactions, AI safety, and ethical considerations, with emphasis on AI's role in creativity and economic incentives.
not much happened today
deepseek-r1 qwen-2.5 qwen-2.5-max deepseek-v3 deepseek-janus-pro gpt-4 nvidia anthropic openai deepseek huawei vercel bespoke-labs model-merging multimodality reinforcement-learning chain-of-thought gpu-optimization compute-infrastructure compression crypto-api image-generation saranormous zizhpan victormustar omarsar0 markchen90 sakanaailabs reach_vb madiator dain_mclau francoisfleuret garygodchaux arankomatsuzaki id_aa_carmack lavanyasant virattt
Huawei chips are highlighted in a diverse AI news roundup covering NVIDIA's stock rebound, new open music foundation models pitched as a local Suno, and competitive AI models such as Qwen 2.5 Max and DeepSeek V3. The release of DeepSeek Janus Pro, a multimodal LLM with image generation capabilities, and advances in reinforcement learning and chain-of-thought reasoning are noted. Discussions include GPU rebranding with NVIDIA's H6400 GPUs, data center innovations, and enterprise AI applications like crypto APIs in hedge funds. "DeepSeek R1's capabilities" and "Qwen 2.5 models added to applications" are key highlights.
DeepSeek #1 on US App Store, Nvidia stock tanks -17%
deepseek-r1 deepseek-v3 qwen2.5-vl o1 deepseek openai nvidia langchain moe-architecture chain-of-thought fp8-precision multimodality vision agentic-ai inference-scaling gpu-optimization model-efficiency ai-chatbots memory-integration tool-use stock-market-reactions sama mervenoyann omarasar0 teortaxestex nptacek carpeetti finbarrtimbers cwolferesearch arthurrapier danhendrycks scaling01 janusflow
DeepSeek has made a significant cultural impact by hitting mainstream news unexpectedly in 2025. The DeepSeek-R1 model features a massive 671B parameter MoE architecture and demonstrates chain-of-thought (CoT) capabilities comparable to OpenAI's o1 at a lower cost. DeepSeek V3's FP8-precision training recipe reportedly trains a 236B parameter model 42% faster than its predecessor. The Qwen2.5-VL multimodal models support images and videos at sizes ranging from 3B to 72B parameters, featuring strong vision and agentic capabilities. LangChain and LangGraph integrations enable AI chatbots with memory and tool use, including applications like the DeFi Agent. Discussions highlight NVIDIA's role in hardware acceleration, with concerns about stock drops driven by DeepSeek's efficiency and market fears. Compute demand is expected to rise despite efficiency gains, driven by inference scaling and MoE design improvements.
DeepSeek R1: o1-level open weights model and a simple recipe for upgrading 1.5B models to Sonnet/4o level
deepseek-r1 deepseek-v3 qwen-2.5 llama-3.1 llama-3.3-70b deepseek ollama qwen llama reinforcement-learning fine-tuning model-distillation model-optimization reasoning reward-models multi-response-sampling model-training
DeepSeek released DeepSeek R1, a significant upgrade over DeepSeek V3 from just three weeks prior, comprising 8 models including the full-size 671B MoE model and multiple distillations into Qwen 2.5 and Llama 3.1/3.3. The models are MIT licensed, allowing finetuning and distillation. Pricing is 27x-50x cheaper than o1. The training process used GRPO (with rewards for correctness and style outcomes) without relying on PRMs, MCTS, or learned reward models, focusing on reasoning improvements through reinforcement learning. Distilled models can run on Ollama and show strong capabilities like writing Manim code. The release emphasizes advances in reinforcement learning, fine-tuning, and model distillation, using the RL framework introduced in DeepSeekMath.
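GRPO is named but not described; as a minimal sketch of the group-relative advantage at its core (assuming the standard formulation, with toy reward numbers), each prompt gets a group of sampled responses, each response is scored with a rule-based reward, and advantages come from normalizing against the group's mean and standard deviation rather than a learned value or reward model.

```python
# Minimal sketch of the group-relative advantage used by GRPO: sample a group of
# responses per prompt, score each with a rule-based reward (e.g. correctness plus a
# style/format bonus), and normalize within the group. No value network, PRM, or MCTS.
import numpy as np

def grpo_advantages(rewards: np.ndarray, eps: float = 1e-6) -> np.ndarray:
    """rewards: shape (num_prompts, group_size) -> same-shape advantages."""
    mean = rewards.mean(axis=1, keepdims=True)
    std = rewards.std(axis=1, keepdims=True)
    return (rewards - mean) / (std + eps)

# Toy example: 2 prompts, 4 sampled responses each; reward = correctness + 0.1 * format bonus.
rewards = np.array([[1.1, 0.0, 1.0, 0.1],
                    [0.0, 0.1, 0.0, 1.1]])
print(grpo_advantages(rewards).round(2))
```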
not much happened today
deepseek-v3 llama-3-1-405b gpt-4o gpt-5 minimax-01 claude-3-haiku cosmos-nemotron-34b openai deep-learning-ai meta-ai-fair google-deepmind saama langchain nvidia mixture-of-experts coding math scaling visual-tokenizers diffusion-models inference-time-scaling retrieval-augmented-generation ai-export-restrictions security-vulnerabilities prompt-injection gpu-optimization fine-tuning personalized-medicine clinical-trials ai-agents persistent-memory akhaliq
DeepSeek-V3, a 671 billion parameter mixture-of-experts model, surpasses Llama 3.1 405B and GPT-4o on coding and math benchmarks. OpenAI signaled an upcoming GPT-5 release. MiniMax-01 Coder mode in ai-gradio enables building a chess game in one shot. Meta research highlights trade-offs in scaling visual tokenizers. Google DeepMind improves diffusion model quality via inference-time scaling. The RA-DIT method fine-tunes LLMs and retrievers together for better RAG responses. The U.S. proposes a three-tier export restriction system on AI chips and models, excluding countries like China and Russia. Security vulnerabilities in AI chatbots involving CSRF and prompt injection were revealed. Concerns about superintelligence and weapons-grade AI models were expressed. ai-gradio updates include NVIDIA NIM compatibility and new models like cosmos-nemotron-34b. LangChain integrates with Claude 3 Haiku for AI agents with persistent memory. Triton warp specialization optimizes GPU usage for matrix multiplication. The fine-tuned Llama models OpenBioLLM-8B and OpenBioLLM-70B target personalized medicine and clinical trials.
not much happened today
oute-tts-0.3-1b oute-tts-0.3-500m olm-1b qwen-2.5-0.5b hover gpt-4o deepseek-v3 harvey meta-ai-fair stability-ai alibaba deepseek hugging-face text-to-speech zero-shot-learning multilinguality emotion-control motor-control reinforcement-learning local-ai distributed-inference pipeline-parallelism mathematical-reasoning process-reward-models legal-ai education-ai ai-security humor reach_vb drjimfan vikhyatk mervenoyann aiatmeta iscienceluvr alibaba_qwen awnihannun ajeya_cotra emollick qtnx_ designerx
Harvey secured a new $300M funding round. OuteTTS 0.3 1B & 500M text-to-speech models were released featuring zero-shot voice cloning, multilingual support (en, jp, ko, zh, fr, de), and emotion control, powered by OLMo-1B and Qwen 2.5 0.5B. The HOVER model, a 1.5M-parameter neural net for agile motor control, was introduced, leveraging human motion capture datasets and massively parallel reinforcement learning. kokoro.js enables running AI models locally in browsers with minimal dependencies. Meta AI awarded $200K LLM evaluation grants for projects on regional language understanding, complex reasoning, and interactive programming environments. Stability AI's Twitter account was hacked, prompting security warnings. Alibaba Qwen improved Process Reward Models (PRMs) for better mathematical reasoning using a consensus filtering mechanism. DeepSeek V3 uses pipeline parallelism to enhance distributed inference and long-context generation efficiency. Discussions on AI policy in legal frameworks and AI's role in democratizing education were highlighted. Lighthearted AI-related humor was also shared.
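Consensus filtering for PRM training data is mentioned only by name; the sketch below illustrates one plausible reading (all helper names and the toy annotators are hypothetical): keep a step-labelled solution only when a Monte-Carlo rollout estimate and an LLM judge agree on where the first error occurs.

```python
# Hypothetical sketch of consensus filtering for PRM training data: retain a solution's
# step labels only if the Monte-Carlo estimate of the first wrong step agrees with an
# LLM-as-judge annotation. Both annotator functions here are stubs.
from typing import Callable, List, Optional

def consensus_filter(
    solutions: List[list],
    mc_first_error: Callable[[list], Optional[int]],     # Monte-Carlo rollout estimate
    judge_first_error: Callable[[list], Optional[int]],  # LLM-judge annotation
) -> List[list]:
    kept = []
    for steps in solutions:
        if mc_first_error(steps) == judge_first_error(steps):
            kept.append(steps)  # the two annotators agree, so trust these step labels
    return kept

# Toy usage: annotators return the index of the first erroneous step, or None if correct.
sols = [["s1", "s2", "s3"], ["s1", "bad", "s3"]]
mc = lambda s: s.index("bad") if "bad" in s else None
judge = lambda s: 1 if "bad" in s else None
print(len(consensus_filter(sols, mc, judge)))  # both toy solutions pass the consensus check
```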
PRIME: Process Reinforcement through Implicit Rewards
claude-3.5-sonnet gpt-4o deepseek-v3 gemini-2.0 openai together-ai deepseek langchain lucidrains reinforcement-learning scaling-laws model-performance agent-architecture software-development compute-scaling multi-expert-models sama aidan_mclau omarsar0 akhaliq hwchase17 tom_doerr lmarena_ai cwolferesearch richardmcngo
PRIME (Process Reinforcement through Implicit Rewards) is highlighted as a significant advancement in online reinforcement learning, training an implicit process reward model on a 7B model with impressive results compared to GPT-4o. The approach builds on the importance of process reward models established by "Let's Verify Step By Step." Additionally, AI Twitter discussions cover topics such as proto-AGI capabilities with Claude 3.5 Sonnet, the role of compute scaling for Artificial Superintelligence (ASI), and model performance nuances. New AI tools like Gemini 2.0 coder mode and LangGraph Studio enhance agent architecture and software development. Industry events include the LangChain AI Agent Conference and meetups fostering AI community connections. Company updates reveal OpenAI's financial challenges with Pro subscriptions and DeepSeek-V3's integration with Together AI APIs, showcasing an efficient 671B-parameter MoE model. Research discussions focus on scaling laws and compute efficiency in large language models.
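The implicit process reward is summarized only at a high level here; the snippet below is a hedged reconstruction of the form commonly associated with PRIME, not the authors' code: per-token rewards are the log-probability ratio between the implicit PRM (fine-tuned only with outcome labels) and a frozen reference model, scaled by beta. The beta value and log-prob numbers are illustrative.

```python
# Sketch of a PRIME-style implicit process reward: per-token reward is
# beta * (log pi_phi(y_t | context) - log pi_ref(y_t | context)), so dense process
# rewards fall out of a model trained only on outcome labels. Toy numbers throughout.
import numpy as np

def implicit_process_rewards(logp_phi: np.ndarray, logp_ref: np.ndarray, beta: float = 0.05):
    """logp_*: per-token log-probs of the sampled response under each model."""
    return beta * (logp_phi - logp_ref)

logp_phi = np.array([-1.2, -0.4, -2.0, -0.1])  # implicit PRM (fine-tuned on outcome labels)
logp_ref = np.array([-1.5, -0.9, -1.9, -0.8])  # frozen reference model
print(implicit_process_rewards(logp_phi, logp_ref))  # dense per-token rewards
```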
not much happened to end the year
deepseek-v3 code-llm o1 sonnet-3.5 deepseek smol-ai reinforcement-learning reasoning training-data mixed-precision-training open-source multimodality software-development natural-language-processing interpretability developer-tools real-time-applications search sdk-generation corbtt tom_doerr cognitivecompai alexalbert__ theturingpost svpino bindureddy
Reinforcement Fine-Tuning (RFT) is introduced as a data-efficient method to improve reasoning in LLMs using minimal training data with strategies like First-Correct Solutions (FCS) and Greedily Diverse Solutions (GDS). DeepSeek-V3, a 671B parameter MoE language model trained on 14.8 trillion tokens with FP8 mixed precision training, highlights advances in large-scale models and open-source LLMs. Predictions for AI in 2025 include growth in smaller models, multimodality, and challenges in open-source AI. The impact of AI on software development jobs suggests a need for higher intelligence and specialization as AI automates low-skilled tasks. Enhancements to CodeLLM improve coding assistance with features like in-place editing and streaming responses. Natural Language Reinforcement Learning (NLRL) offers better interpretability and richer feedback for AI planning and critique. AI hiring is growing rapidly with startups seeking strong engineers in ML and systems. New AI-powered tools such as Rivet, Buzee, and Konfig improve real-time applications, search, and SDK generation using technologies like Rust and V8 isolates.
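FCS and GDS are only named above; as a rough sketch of one reading (the token-overlap similarity and sample format are illustrative stand-ins), First-Correct Solutions keeps only the first correct sample per problem, while Greedily Diverse Solutions greedily adds correct samples that are least similar to those already kept.

```python
# Hedged sketch of two RFT data-selection strategies: FCS keeps the first correct sample
# per problem; GDS greedily keeps correct samples that look least like those already chosen.
from typing import List, Tuple

def first_correct(samples: List[Tuple[str, bool]]) -> List[str]:
    for text, is_correct in samples:
        if is_correct:
            return [text]
    return []

def overlap(a: str, b: str) -> float:
    ta, tb = set(a.split()), set(b.split())
    return len(ta & tb) / max(1, len(ta | tb))  # Jaccard token overlap as a toy similarity

def greedily_diverse(samples: List[Tuple[str, bool]], k: int = 2) -> List[str]:
    correct = [t for t, ok in samples if ok]
    chosen: List[str] = []
    while correct and len(chosen) < k:
        # Pick the correct sample with the lowest max-similarity to anything chosen so far.
        best = min(correct, key=lambda t: max((overlap(t, c) for c in chosen), default=0.0))
        chosen.append(best)
        correct.remove(best)
    return chosen

samples = [("use algebra: x = 4", True), ("guess 7", False), ("factor then solve: x = 4", True)]
print(first_correct(samples))      # ['use algebra: x = 4']
print(greedily_diverse(samples))   # two dissimilar correct solutions
```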
not much happened today
deepseek-v3 chatgpt-4 openai deepseek google qwen overfitting reasoning misguided-attention model-evaluation model-architecture finetuning open-source sam-altman
Sam Altman publicly criticizes DeepSeek and Qwen models, sparking debate about OpenAI's innovation claims and its reliance on foundational research like the Transformer architecture. DeepSeek V3 shows significant overfitting issues in the Misguided Attention evaluation, solving only 22% of test prompts, raising concerns about its reasoning and finetuning. Despite skepticism about its open-source status, DeepSeek V3 is claimed to surpass GPT-4 as an open-source model, a milestone reached roughly 1.75 years after GPT-4's release on March 14, 2023. The discussions highlight competitive dynamics in AI model performance and innovation sustainability.
not much happened today
vllm deepseek-v3 llamaindex openai deepseek qdrant twilio llamaindex elevenlabs training-efficiency parallelism cpu-offloading gradient-descent mixture-of-experts fp8-precision memory-optimization ai-voice-assistants coding-assistants document-processing version-control learning-rate-schedules federated-learning agentic-systems multi-agent-systems deliberative-alignment chain-of-thought on-device-ai multimodality francois-fleuret daniel-hanchen aaron-defazio fchollet elad-gil wojciech-zaremba richard-socher
ChatGPT, Sora, and the OpenAI API experienced a >5 hour outage but are now restored. Updates to vLLM enable DeepSeek-V3 to run with enhanced parallelism and CPU offloading, improving model deployment flexibility. Discussions on gradient descent in top-k routing MoE and adoption of FP8 precision focus on training efficiency and memory optimization. AIDE, an AI voice medical assistant by Team Therasync, leverages Qdrant, OpenAI, and Twilio. DeepSeek-Engineer offers AI-powered coding assistance with structured outputs. LlamaIndex integrates LlamaCloud and ElevenLabs for large-scale document processing and voice interaction. Insights on version control with ghstack and advocacy for linear decay learning rate schedules highlight best practices in AI development. Experts predict smaller, tighter models, true multimodal models, and on-device AI in 2025. Proposals for planetary-scale federated learning and community AGI moonshots emphasize future AI directions. Discussions on agentic systems, multi-agent workflows, and deliberative alignment through chain of thought reasoning underscore AI safety and alignment efforts.
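Since linear decay learning-rate schedules come up above, here is a one-function sketch: the learning rate ramps up over a short warmup and then decays linearly to zero by the final step. The warmup phase and all numeric values are illustrative assumptions, not part of the cited advocacy.

```python
# Minimal linear-decay learning-rate schedule: optional warmup, then a straight line to
# zero at the final step. Warmup is an added convenience, not part of the cited claim.
def linear_decay_lr(step: int, total_steps: int, peak_lr: float, warmup_steps: int = 0) -> float:
    if warmup_steps and step < warmup_steps:
        return peak_lr * step / warmup_steps          # linear warmup to the peak rate
    remaining = max(0, total_steps - step)
    return peak_lr * remaining / max(1, total_steps - warmup_steps)  # linear decay to zero

for s in (0, 250, 500, 750, 1000):
    print(s, round(linear_decay_lr(s, total_steps=1000, peak_lr=3e-4, warmup_steps=100), 6))
```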
DeepSeek v3: 671B fine-grained MoE trained for $5.5m USD of compute on 15T tokens
deepseek-v3 gpt-4o claude-3.5-sonnet llama-3 deepseek-ai hugging-face openai anthropic mixture-of-experts model-training model-optimization reinforcement-learning chain-of-thought multi-token-prediction synthetic-data model-distillation fine-tuning attention-mechanisms gpu-optimization nrehiew_ denny_zhou
DeepSeek-V3 has launched with 671B MoE parameters, trained on 14.8T tokens, and outperforms GPT-4o and Claude 3.5 Sonnet on benchmarks. It was trained with only 2.788M H800 GPU-hours, significantly less than Llama 3's 30.8M GPU-hours, showcasing major compute efficiency and cost reduction. The model is open-source and deployed via Hugging Face with API support. Innovations include native FP8 mixed-precision training, Multi-Head Latent Attention scaling, distillation from synthetic reasoning data, pruning and healing for MoEs with up to 256 experts, and a new multi-token prediction objective enabling lookahead token planning. Research highlights also cover the OREO method and Natural Language Reinforcement Learning (NLRL) for multi-step reasoning and agent control.
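The multi-token prediction objective is described in only one phrase; as a hedged, simplified sketch (DeepSeek-V3's real MTP uses chained modules sharing the embedding and output head, not independent heads as here), extra heads at each position predict tokens one or more steps ahead and their cross-entropy losses are averaged with the standard next-token loss.

```python
# Simplified toy sketch of a multi-token prediction (MTP) objective: heads at each
# position predict tokens at offsets +1..+depth, and the per-offset losses are averaged.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyMTPModel(nn.Module):
    def __init__(self, vocab: int = 100, d: int = 32, depth: int = 2):
        super().__init__()
        self.embed = nn.Embedding(vocab, d)
        self.trunk = nn.GRU(d, d, batch_first=True)                       # stand-in backbone
        self.heads = nn.ModuleList([nn.Linear(d, vocab) for _ in range(depth)])  # offsets 1..depth

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        h, _ = self.trunk(self.embed(tokens))                             # (B, T, d)
        losses = []
        for k, head in enumerate(self.heads, start=1):
            logits = head(h[:, :-k])                                      # predict token at t+k
            target = tokens[:, k:]
            losses.append(F.cross_entropy(logits.reshape(-1, logits.size(-1)), target.reshape(-1)))
        return torch.stack(losses).mean()

tokens = torch.randint(0, 100, (4, 16))
print(ToyMTPModel()(tokens).item())  # combined MTP loss on random data
```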