Company: "perplexity-ai"
Reasoning Price War 2: Mistral Magistral + o3's 80% price cut + o3-pro
o3 o3-pro gpt-4.1 claude-4-sonnet gemini-2.5-pro magistral-small magistral-medium mistral-small-3.1 openai anthropic google-deepmind mistral-ai perplexity-ai reasoning token-efficiency price-cut benchmarking open-source model-releases context-windows gpu-optimization swyx sama scaling01 polynoamial nrehiew_ kevinweil gdb flavioad stevenheidel aravsrinivas
OpenAI announced an 80% price cut for its o3 model, making it competitively priced against GPT-4.1 and rivals such as Anthropic's Claude 4 Sonnet and Google's Gemini 2.5 Pro. Alongside the cut, OpenAI released o3-pro as a more powerful and reliable variant, though early benchmarks showed mixed performance relative to cost. Mistral AI launched its Magistral reasoning models, including an open-source 24B-parameter version optimized for efficient deployment on consumer GPUs. The price reduction and new model releases signal intensified competition in reasoning-focused large language models, with notable improvements in token efficiency and cost-effectiveness.
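The headline arithmetic is straightforward: an 80% cut leaves one-fifth of the original per-token price. A minimal sketch in Python, assuming for illustration prior list prices of $10/1M input and $40/1M output tokens (the exact figures are not stated in this summary):

```python
def apply_price_cut(price_per_million: float, cut: float) -> float:
    """Return the new per-1M-token price after a fractional price cut."""
    return price_per_million * (1 - cut)

# Illustrative: $10/1M input and $40/1M output, each cut by 80%
new_input = apply_price_cut(10.00, 0.80)   # one-fifth of the old price
new_output = apply_price_cut(40.00, 0.80)
print(round(new_input, 2), round(new_output, 2))  # 2.0 8.0
```

Under these assumed figures the cut lands o3 at $2/1M input and $8/1M output, which is what makes it directly comparable with GPT-4.1-class pricing.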
not much happened today
codex claude-4-opus claude-4-sonnet gemini-2.5-pro gemini-2.5 qwen-2.5-vl qwen-3 playdiffusion openai anthropic google perplexity-ai bing playai suno hugging-face langchain-ai qwen mlx assemblyai llamacloud fine-tuning model-benchmarking text-to-video agentic-ai retrieval-augmented-generation open-source-models speech-editing audio-processing text-to-speech ultra-low-latency multimodality public-notebooks sama gdb kevinweil lmarena_ai epochairesearch reach_vb wightmanr deeplearningai mervenoyann awnihannun jordirib1 aravsrinivas omarsar0 lioronai jerryjliu0 nerdai tonywu_71 _akhaliq clementdelangue _mfelfel
OpenAI rolled out Codex to ChatGPT Plus users with internet access and fine-grained controls, improving memory features for free users. Anthropic's Claude 4 Opus and Sonnet models lead coding benchmarks, while Google's Gemini 2.5 Pro and Flash models gain recognition with new audio capabilities. Qwen 2.5-VL and Qwen 3 quantizations are noted for versatility and support. Bing Video Creator launched globally enabling text-to-video generation, and Perplexity Labs sees increased demand for travel search. New agentic AI tools and RAG innovations include LlamaCloud and FedRAG. Open-source releases include Holo-1 for web navigation and PlayAI's PlayDiffusion for speech editing. Audio and multimodal advances feature Suno's music editing upgrades, Google's native TTS in 24+ languages, and Universal Streaming's ultra-low latency speech-to-text. Google NotebookLM now supports public notebooks. "Codex's internet access brings tradeoffs, with explicit warnings about risk" and "Gemini 2.5 Pro is cited as a daily driver by users".
Cognition's DeepWiki, a free encyclopedia of all GitHub repos
o4-mini perception-encoder qwen-2.5-vl dia-1.6b grok-3 gemini-2.5-pro claude-3.7 gpt-4.1 cognition meta-ai-fair alibaba hugging-face openai perplexity-ai vllm vision text-to-speech reinforcement-learning ocr model-releases model-integration open-source frameworks chatbots model-selector silas-alberti mervenoyann reach_vb aravsrinivas vikparuchuri lioronai
Silas Alberti of Cognition announced DeepWiki, a free encyclopedia of all GitHub repos providing Wikipedia-like descriptions and Devin-backed chatbots for public repos. Meta released Perception Encoders (PE) under an Apache 2.0 license, outperforming InternVL3 and Qwen2.5-VL on vision tasks. Alibaba launched the Qwen Chat App for iOS and Android. Hugging Face integrated the Dia 1.6B SoTA text-to-speech model via FAL. OpenAI expanded deep research usage with a lightweight version powered by the o4-mini model, now available to free users. Perplexity AI updated their model selector with Grok 3 Beta, o4-mini, and support for models like Gemini 2.5 Pro, Claude 3.7, and GPT-4.1. The vLLM project introduced the OpenRLHF framework for reinforcement learning from human feedback. The Surya OCR alpha model supports 90+ languages and LaTeX. MegaParse, an open-source library for LLM-ready data formats, was also introduced.
GPT 4.1: The New OpenAI Workhorse
gpt-4.1 gpt-4.1-mini gpt-4.1-nano gpt-4o gemini-2.5-pro openai llama-index perplexity-ai google-deepmind coding instruction-following long-context benchmarks model-pricing model-integration model-deprecation sama kevinweil omarsar0 aidan_mclau danhendrycks polynoamial scaling01 aravsrinivas lmarena_ai
OpenAI released GPT-4.1, including GPT-4.1 mini and GPT-4.1 nano, highlighting improvements in coding, instruction following, and handling long contexts up to 1 million tokens. The model scores 54% on SWE-bench Verified and shows a 60% improvement over GPT-4o on internal benchmarks. Pricing for GPT-4.1 nano is notably low at $0.10/1M input tokens and $0.40/1M output tokens. GPT-4.5 Preview is being deprecated in favor of GPT-4.1. Integration support includes day-0 support in LlamaIndex. Some negative feedback was noted for GPT-4.1 nano. Additionally, Perplexity's Sonar API ties with Gemini 2.5 Pro for the top spot on the LM Search Arena leaderboard. New benchmarks like MRCR and GraphWalks were introduced alongside updated prompting guides and cookbooks.
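Given the stated nano prices, per-request cost is simple arithmetic; a quick sketch (the function name is ours, for illustration):

```python
def request_cost(input_tokens: int, output_tokens: int,
                 price_in: float, price_out: float) -> float:
    """Dollar cost of one request, given per-1M-token prices."""
    return (input_tokens * price_in + output_tokens * price_out) / 1_000_000

# GPT-4.1 nano at the stated $0.10/1M input and $0.40/1M output:
# a full 1M-token context plus a 200k-token completion costs
# $0.10 + $0.08 = $0.18
cost = request_cost(1_000_000, 200_000, 0.10, 0.40)
print(f"${cost:.2f}")  # $0.18
```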
not much happened today
gemini-2.0-flash imagen-3 mistral-small-3.1 mistral-3 gpt-4o-mini claude-3.5-haiku olm0-32b qwen-2.5 shieldgemma-2 julian fasttransform nvidia google mistral-ai allen-ai anthropic langchainai perplexity-ai kalshi stripe qodoai multimodality image-generation context-windows model-pricing open-source-models image-classification frameworks python-libraries partnerships jeremyphoward karpathy abacaj mervenoyann
At Nvidia GTC Day 1, several AI updates were highlighted: Google's Gemini 2.0 Flash introduces image input/output but is not recommended for text-to-image tasks, with Imagen 3 preferred for that. Mistral AI released Mistral Small 3.1 with 128k token context window and competitive pricing. Allen AI launched OLMo-32B, an open LLM outperforming GPT-4o mini and Qwen 2.5. ShieldGemma 2 was introduced for image safety classification. LangChainAI announced multiple updates including Julian powered by LangGraph and integration with AnthropicAI's MCP. Jeremy Howard released fasttransform, a Python library for data transformations. Perplexity AI partnered with Kalshi for NCAA March Madness predictions.
Anthropic's $61.5B Series E
gpt-4.5 claude-3.7-sonnet deepseek-r1 anthropic openai deepseek lmsys perplexity-ai deutsche-telekom model-performance benchmarking style-control coding multi-turn funding partnerships workflow lmarena_ai teortaxestex casper_hansen_ omarsar0 aidan_mclau willdepue vikhyatk teknim1 reach_vb _aidan_clark_ cto_junior aravsrinivas
Anthropic raised a $3.5 billion Series E funding round at a $61.5 billion valuation, signaling strong financial backing for the maker of Claude. GPT-4.5 achieved the #1 rank across all categories on the LMArena leaderboard, excelling in multi-turn conversations, coding, math, creative writing, and style control. DeepSeek R1 tied with GPT-4.5 for top performance on hard prompts with style control. Discussions highlighted comparisons between GPT-4.5 and Claude 3.7 Sonnet in coding and workflow applications. The importance of the LMSYS benchmark was emphasized, though some questioned the relevance of benchmarks versus user acquisition. Additionally, Perplexity AI partnered with Deutsche Telekom to integrate the Perplexity Assistant into a new AI phone.
not much happened today
gpt-4.5 gpt-4 gpt-4o o1 claude-3.5-sonnet claude-3.7 claude-3-opus deepseek-v3 grok-3 openai anthropic perplexity-ai deepseek scaling01 model-performance humor emotional-intelligence model-comparison pricing context-windows model-size user-experience andrej-karpathy jeremyphoward abacaj stevenheidel yuchenj_uw aravsrinivas dylan522p random_walker
GPT-4.5 sparked mixed reactions on Twitter, with @karpathy noting users preferred GPT-4o in a blind poll despite his personal favor for GPT-4.5's creativity and humor. Critics like @abacaj highlighted GPT-4.5's slowness and questioned its practical value and pricing compared to other models. Performance-wise, GPT-4.5 ranks above GPT-4o but below o1 and Claude 3.5 Sonnet, with Claude 3.7 outperforming it on many tasks, yet GPT-4.5 was praised for its humor and "vibes." Speculation about GPT-4.5's size suggests around 5 trillion parameters. Discussions also touched on pricing disparities, with Perplexity Deep Research at $20/month versus ChatGPT Pro at $200/month. The emotional intelligence and humor of models like Claude 3.7 were also noted.
lots of small launches
gpt-4o claude-3.7-sonnet claude-3.7 claude-3.5-sonnet deepseek-r1 deepseek-v3 grok-3 openai anthropic amazon cloudflare perplexity-ai deepseek-ai togethercompute elevenlabs elicitorg inceptionailabs mistral-ai voice model-releases cuda gpu-optimization inference open-source api model-performance token-efficiency context-windows jit-compilation lmarena_ai alexalbert__ aravsrinivas reach_vb
GPT-4o Advanced Voice Preview is now available for free ChatGPT users with enhanced daily limits for Plus and Pro users. Claude 3.7 Sonnet has achieved the top rank in WebDev Arena with improved token efficiency. DeepSeek-R1 with 671B parameters benefits from the Together Inference platform optimizing NVIDIA Blackwell GPU usage, alongside the open-source DeepGEMM CUDA library delivering up to 2.7x speedups on Hopper GPUs. Perplexity launched a new Voice Mode and a Deep Research API. The upcoming Grok 3 API will support a 1M token context window. Several companies including Elicit, Amazon, Anthropic, Cloudflare, FLORA, Elevenlabs, and Inception Labs announced new funding rounds, product launches, and model releases.
not much happened today
claude-3.7-sonnet claude-3.7 deepseek-r1 o3-mini deepseek-v3 gemini-2.0-pro gpt-4o qwen2.5-coder-32b-instruct anthropic perplexity-ai amazon google-cloud deepseek_ai coding reasoning model-benchmarking agentic-workflows context-window model-performance open-source moe model-training communication-libraries fp8 nvlink rdma cli-tools skirano omarsar0 reach_vb artificialanlys terryyuezhuo _akhaliq _philschmid catherineols goodside danielhanchen
Claude 3.7 Sonnet demonstrates exceptional coding and reasoning capabilities, outperforming models like DeepSeek R1, o3-mini, and GPT-4o on benchmarks such as SciCode and LiveCodeBench. It is available on platforms including Perplexity Pro, Anthropic, Amazon Bedrock, and Google Cloud, priced at $3/$15 per million input/output tokens. Key features include a 64k-token thinking mode, a 200k context window, and the CLI-based coding assistant Claude Code. Meanwhile, DeepSeek released DeepEP, an open-source communication library optimized for MoE model training and inference with support for NVLink, RDMA, and FP8. These updates highlight advancements in coding AI and efficient model training infrastructure.
The Ultra-Scale Playbook: Training LLMs on GPU Clusters
deepseek-native-sparse-attention r1-1776 paligemma-2-mix muse baichuan-m1-14b stripedhyena-2 huggingface deepseek perplexity-ai google-deepmind microsoft baichuan stripedhyena gpu-training scaling multimodality vision model-training foundation-models medical-llm genome-modeling robotic-manipulation interactive-content eliebakouch nouamanetazi lvwerra thom-wolf proftomyeh alex-wang aravsrinivas _akhaliq _philschmid mervenoyann reach_vb arankomatsuzaki maximelabonne
Huggingface released "The Ultra-Scale Playbook: Training LLMs on GPU Clusters," an interactive blogpost based on 4000 scaling experiments on up to 512 GPUs, providing detailed insights into modern GPU training strategies. DeepSeek introduced the Native Sparse Attention (NSA) model, gaining significant community attention, while Perplexity AI launched R1-1776, an uncensored and unbiased version of DeepSeek's R1 model. Google DeepMind unveiled PaliGemma 2 Mix, a multi-task vision-language model available in 3B, 10B, and 28B sizes. Microsoft introduced Muse, a generative AI model trained on the game Bleeding Edge, and presented Magma, a foundation model for multimodal AI agents excelling in UI navigation and robotic manipulation. Baichuan-M1-14B was announced as a state-of-the-art medical LLM trained on 20T tokens, and a fully open-source 40B genome modeling model using StripedHyena 2 architecture was also released. "Making your own gaming experience is coming sooner than you'd think," noted in relation to Muse.
not much happened today
chatgpt-4o deepseek-r1 o3 o3-mini gemini-2-flash qwen-2.5 qwen-0.5b hugging-face openai perplexity-ai deepseek-ai gemini qwen metr_evals reasoning benchmarking model-performance prompt-engineering model-optimization model-deployment small-language-models mobile-ai ai-agents speed-optimization _akhaliq aravsrinivas lmarena_ai omarsar0 risingsayak
Smolagents library by Huggingface continues trending. The latest ChatGPT-4o version (chatgpt-4o-latest-20250129) was released. DeepSeek R1 671B sets a speed record at 198 t/s, the fastest reasoning model, recommended with specific prompt settings. Perplexity Deep Research outperforms models like Gemini Thinking, o3-mini, and DeepSeek-R1 on the Humanity's Last Exam benchmark with a 21.1% score, and reaches 93.9% accuracy on SimpleQA. ChatGPT-4o ranks #1 on the Arena leaderboard in multiple categories, except math. OpenAI's o3 model powers the Deep Research tool for ChatGPT Pro users. Gemini 2 Flash and Qwen 2.5 models support the LLMGrading verifier. Qwen 2.5 models were added to the PocketPal app. MLX shows small LLMs like Qwen 0.5B generating tokens at high speed on M4 Max and iPhone 16 Pro. Gemini Flash 2.0 leads a new AI agent leaderboard. DeepSeek R1 is the most-liked model on Hugging Face with over 10 million downloads.
not much happened today
zonos-v0.1 audiobox-aesthetics moshi sonar llama-3-70b gpt-4o-mini claude-3.5-haiku gpt-4o claude-3.5-sonnet deepseek-r1-distilled-qwen-1.5b reasonflux-32b o1-preview zyphra-ai meta-ai-fair kyutai-labs perplexity-ai cerebras uc-berkeley brilliant-labs google-deepmind text-to-speech speech-to-speech benchmarking model-performance reinforcement-learning math real-time-processing open-source cross-platform-integration multilinguality zero-shot-learning danhendrycks
Zyphra AI launched Zonos-v0.1, a leading open-weight text-to-speech model supporting multiple languages and zero-shot voice cloning. Meta FAIR released the open-source Audiobox Aesthetics model trained on 562 hours of audio data. Kyutai Labs introduced Moshi, a real-time speech-to-speech system with low latency. Perplexity AI announced the Sonar model based on Llama 3.3 70b, outperforming top models like GPT-4o and Claude 3.5 Sonnet with 1200 tokens/second speed, powered by Cerebras infrastructure. UC Berkeley open-sourced a 1.5B model trained with reinforcement learning that beats o1-preview on math tasks. ReasonFlux-32B achieved 91.2% on the MATH benchmark, outperforming OpenAI o1-preview. CrossPoster, an AI agent for cross-platform posting, was released using LlamaIndex workflows. Brilliant Labs integrated the Google DeepMind Gemini Live API into smart glasses for real-time translation and object identification.
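At quoted decode speeds, wall-clock latency is just tokens divided by throughput (ignoring prefill and network time). A rough sketch using the 1,200 tokens/second figure cited for Sonar:

```python
def generation_seconds(num_tokens: int, tokens_per_second: float) -> float:
    """Time to decode num_tokens at a steady throughput.

    Ignores prefill and network overhead, so this is a lower bound.
    """
    return num_tokens / tokens_per_second

# A 600-token answer at Sonar's quoted 1200 tok/s decodes in ~0.5 s
print(generation_seconds(600, 1200.0))  # 0.5
```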
OpenAI launches Operator, its first Agent
operator deepseek-r1 videollama-3 llama-4 o1 claude openai anthropic deepseek-ai google-deepmind perplexity-ai computer-using-agent reasoning multimodality performance-benchmarks open-source ai-safety benchmarking video-generation model-evaluation sam-altman swyx
OpenAI launched Operator, a premium computer-using agent for web tasks like booking and ordering, available now for Pro users in the US with an API promised. It runs long-horizon tasks on remote VMs for up to 20 minutes and supports video export, showing state-of-the-art agent performance though not yet human-level. Anthropic had launched a similar agent 3 months earlier as an open-source demo. DeepSeek AI unveiled DeepSeek R1, an open-source reasoning model excelling on the Humanity's Last Exam dataset and outperforming models like OpenAI's o1. Alibaba's DAMO Academy open-sourced VideoLLaMA 3, a multimodal foundation model for image and video understanding. Perplexity AI released Perplexity Assistant for Android with reasoning and search capabilities. The Humanity's Last Exam dataset contains 3,000 questions testing AI reasoning, with current models scoring below 10% accuracy, indicating room for improvement. OpenAI's Computer-Using Agent (CUA) shows improved performance on OSWorld and WebArena benchmarks but still lags behind humans. Anthropic introduced Citations for safer AI responses. Sam Altman and Swyx commented on Operator's launch and capabilities.
Perplexity starts Shopping for you
pixtral-large-124b llama-3.1-405b claude-3.6 claude-3.5 stripe perplexity-ai mistral-ai hugging-face cerebras anthropic weights-biases google vllm-project multi-modal image-generation inference context-windows model-performance model-efficiency sdk ai-integration one-click-checkout memory-optimization patrick-collison jeff-weinstein mervenoyann sophiamyang tim-dettmers omarsar0 akhaliq aravsrinivas
Stripe launched their Agent SDK, enabling AI-native shopping experiences like Perplexity Shopping for US Pro members, featuring one-click checkout and free shipping via the Perplexity Merchant Program. Mistral AI released the Pixtral Large 124B multi-modal image model, now on Hugging Face and supported by Le Chat for image generation. Cerebras Systems offers a public inference endpoint for Llama 3.1 405B with a 128k context window and high throughput. Claude 3.6 (the community's informal name for the updated Claude 3.5 Sonnet) shows improvements over Claude 3.5 but with subtle hallucinations. The Bi-Mamba 1-bit architecture improves LLM efficiency. The wandb SDK comes preinstalled on Google Colab, and Pixtral Large is integrated into AnyChat and supported by vLLM for efficient model usage.
Not much happened today
grok-beta llama-3-1-70b claude-3-5-haiku claude-3-opus llama-3 chatgpt gemini meta-ai-fair scale-ai anthropic perplexity-ai langchainai weights-biases qwen pricing national-security defense open-source agentic-ai retrieval-augmented-generation election-predictions real-time-updates annotation ai-ecosystem memes humor alexandr_wang svpino aravsrinivas bindureddy teortaxestex jessechenglyu junyang-lin cte_junior jerryjliu0
Grok Beta surpasses Llama 3.1 70B in intelligence but is less competitive due to its pricing at $5/1M input tokens and $15/1M output tokens. Defense Llama, developed with Meta AI and Scale AI, targets American national security applications. SWE-Kit, an open-source framework, supports building customizable AI software engineers compatible with Llama 3, ChatGPT, and Claude. LangChainAI and Weights & Biases integrate to improve retrievers and reduce hallucinations in RAG applications using Gemini. Perplexity AI offers enhanced election tracking tools for the 2024 elections, including live state results and support for Claude 3.5 Haiku. AI Talk launched featuring discussions on Chinese AI labs with guests from Qwen. Memes highlight Elon Musk and humorous AI coding mishaps.
not much happened today
smollm2 llama-3-2 stable-diffusion-3.5 claude-3.5-sonnet gemini openai anthropic google meta-ai-fair suno-ai perplexity-ai on-device-ai model-performance robotics multimodality ai-regulation model-releases natural-language-processing prompt-engineering agentic-ai ai-application model-optimization sam-altman akhaliq arav-srinivas labenz loubnabenallal1 alexalbert fchollet stasbekman svpino rohanpaul_ai hamelhusain
ChatGPT Search was launched by Sam Altman, who called it his favorite feature since ChatGPT's original launch, doubling his usage. Comparisons were made between ChatGPT Search and Perplexity with improvements noted in Perplexity's web navigation. Google introduced a "Grounding" feature in the Gemini API & AI Studio enabling Gemini models to access real-time web information. Despite Gemini's leaderboard performance, developer adoption lags behind OpenAI and Anthropic. SmolLM2, a new small, powerful on-device language model, outperforms Meta's Llama 3.2 1B. A Claude desktop app was released for Mac and Windows. Meta AI announced robotics advancements including Meta Sparsh, Meta Digit 360, and Meta Digit Plexus. Stable Diffusion 3.5 Medium, a 2B parameter model with a permissive license, was released. Insights on AGI development suggest initial inferiority but rapid improvement. Anthropic advocates for early targeted AI regulation. Discussions on ML specialization predict training will concentrate among few companies, while inference becomes commoditized. New AI tools include Suno AI Personas for music creation, PromptQL for natural language querying over data, and Agent S for desktop task automation. Humor was shared about Python environment upgrades.
The AI Search Wars Have Begun — SearchGPT, Gemini Grounding, and more
gpt-4o o1-preview claude-3.5-sonnet universal-2 openai google gemini nyt perplexity-ai glean nvidia langchain langgraph weights-biases cohere weaviate fine-tuning synthetic-data distillation hallucinations benchmarking speech-to-text robotics neural-networks ai-agents sam-altman alexalbert__ _jasonwei svpino drjimfan virattt
ChatGPT launched its search functionality across all platforms using a fine-tuned version of GPT-4o with synthetic data generation and distillation from o1-preview. This feature includes a Chrome extension promoted by Sam Altman but has issues with hallucinations. The launch coincides with Gemini introducing Search Grounding after delays. Notably, The New York Times is not a partner due to a lawsuit against OpenAI. The AI search competition intensifies with consumer and B2B players like Perplexity and Glean. Additionally, Claude 3.5 Sonnet achieved a new benchmark record on SWE-bench Verified, and a new hallucination evaluation benchmark, SimpleQA, was introduced. Other highlights include the Universal-2 speech-to-text model with 660M parameters and HOVER, a neural whole-body controller for humanoid robots trained in NVIDIA Isaac simulation. AI hedge fund teams using LangChain and LangGraph were also showcased. The news is sponsored by the RAG++ course featuring experts from Weights & Biases, Cohere, and Weaviate.
Creating a LLM-as-a-Judge
claude-3.5-sonnet claude-3.5 notebooklm simpleqa recraft-v3 anthropic openai deepmind apple zep perplexity-ai github critique-shadowing llm-judging domain-experts dataset-creation prompt-engineering error-analysis temporal-knowledge-graphs memory-layer ai-agent-memory hallucination-reduction integration hamel-husain swyx
Anthropic released details on Claude 3.5 Sonnet's SWE-bench and SWE-agent results, while OpenAI introduced SimpleQA and Google launched NotebookLM. Apple announced new M4 MacBooks, and a new SOTA image model, Recraft v3, emerged. Hamel Husain presented a detailed 6,000-word treatise on creating LLM judges using a method called critique shadowing to align LLMs with domain experts, addressing the problem of untrusted and unused data in AI teams. The workflow involves expert-reviewed datasets and iterative prompt refinement. Additionally, Zep introduced a temporal knowledge graph memory layer to improve AI agent memory and reduce hallucinations. Anthropic also integrated Claude 3.5 Sonnet with GitHub Copilot, expanding access to Copilot Chat users.
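The critique-shadowing workflow described above is, at its core, a loop: collect real traces, have a domain expert label and critique them, run the candidate judge prompt over the same traces, and revise the prompt until judge and expert agree. A schematic sketch, with all helper names hypothetical rather than taken from Hamel Husain's post:

```python
def align_judge(traces, expert_label, run_judge, revise_prompt,
                target_agreement=0.9, max_rounds=5):
    """Iteratively refine a judge prompt until it tracks expert judgments."""
    prompt = "Label each trace pass/fail and explain your critique."
    agreement = 0.0
    for _ in range(max_rounds):
        expert = [expert_label(t) for t in traces]        # ground-truth labels
        judged = [run_judge(prompt, t) for t in traces]   # judge's labels
        agreement = sum(e == j for e, j in zip(expert, judged)) / len(traces)
        if agreement >= target_agreement:
            break                         # judge now shadows the expert
        prompt = revise_prompt(prompt, traces, expert, judged)
    return prompt, agreement
```

The point of the method is that the expert reviews concrete critiques rather than writing a rubric from scratch, so each round of prompt revision is grounded in real disagreements.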
not much happened today
llama-3.1-nemotron-70b golden-gate-claude embed-3 liquid-ai anthropic cohere openai meta-ai-fair nvidia perplexity-ai langchain kestra ostrisai llamaindex feature-steering social-bias multimodality model-optimization workflow-orchestration inference-speed event-driven-workflows knowledge-backed-agents economic-impact ai-national-security trust-dynamics sam-altman lmarena_ai aravsrinivas svpino richardmcngo ajeya_cotra tamaybes danhendrycks jerryjliu0
Liquid AI held a launch event introducing new foundation models. Anthropic shared follow-up research on social bias and feature steering building on its "Golden Gate Claude" work. Cohere released multimodal Embed 3 embeddings models following Aya Expanse. Misinformation about GPT-5/Orion was debunked by Sam Altman. Meta AI FAIR announced Open Materials 2024 with new models and datasets for inorganic materials discovery using the EquiformerV2 architecture. Anthropic demonstrated feature steering to balance social bias and model capabilities. NVIDIA's Llama-3.1-Nemotron-70B ranked highly on the Arena leaderboard with style control. Perplexity AI expanded to 100M weekly queries with new finance and reasoning modes. LangChain emphasized real application integration with interactive frame interpolation. Kestra highlighted scalable event-driven workflows with open-source YAML-based orchestration. OpenFLUX doubled inference speed through guidance LoRA training. Discussions on AI safety included trust dynamics between humans and AI, economic impacts of AI automation, and the White House AI National Security memo addressing cyber and biological risks. LlamaIndex showcased knowledge-backed agents for enhanced AI applications.
DeepSeek Janus and Meta SpiRit-LM: Decoupled Image and Expressive Voice Omnimodality
nemotron-70b claude claude-3.5-sonnet gpt-4o deepseek meta-ai-fair wandb nvidia anthropic hugging-face perplexity-ai multimodality image-generation speech-synthesis fine-tuning model-merging benchmarking open-source model-optimization reinforcement-learning bindureddy aravsrinivas danielhanchen clementdelangue cwolferesearch
DeepSeek Janus and Meta SpiRit-LM are two notable multimodality AI models recently released, showcasing advances in image generation and speech synthesis respectively. DeepSeek Janus separates vision encoders for image understanding and generation, achieving better results in both tasks. Meta's SpiRit-LM introduces an expressive speech and writing model generating pitch and style units, improving over standard TTS. Additionally, W&B Weave offers comprehensive LLM observability and multimodality fine-tuning tools. Industry updates include Nvidia's Nemotron 70b model underperforming, Meta open-sourcing Movie Gen Bench for media generation benchmarking, Perplexity launching internal search with multi-step reasoning, and Anthropic updating Claude apps. Open source progress includes Hugging Face's gradient accumulation fix in transformers and advocacy for open source AI to prevent Big Tech dominance. "Model merging for combining skills of multiple models" is also highlighted.
Not much technical happened today
whisper-v3-turbo llama-3 llamaindex openai poolside liquidai perplexity-ai meta-ai-fair cohere fujitsu mixture-of-experts context-windows model-optimization fine-tuning quantization model-training alignment synthetic-data model-architecture agentic-ai nick-turley arav-srinivas francois-fleuret finbarr-timbers lewtun francois-chollet jerry-j-liu mmitchell-ai jxnlco
OpenAI announced raising $6.6B in new funding at a $157B valuation, with ChatGPT reaching 250M weekly active users. Poolside raised $500M to advance AGI development. LiquidAI introduced three new models (1B, 3B, and a 40B MoE) with a 32k context window and efficient token handling. OpenAI released Whisper V3 Turbo, an open-source multilingual model with significant speed improvements. Meta AI FAIR is hiring research interns focusing on LLM reasoning, alignment, synthetic data, and novel architectures. Cohere partnered with Fujitsu to launch Takane, a custom Japanese model. Technical discussions included challenges in LoRA fine-tuning, float8 quantization in Keras, and new tools like create-llama for agent templates. Industry commentary raised concerns about AI development priorities and highlighted freelancing opportunities in AI.
not much happened today
o1-preview o1-mini qwen-2.5 gpt-4o deepseek-v2.5 gpt-4-turbo-2024-04-09 grin llama-3-1-405b veo kat openai qwen deepseek-ai microsoft kyutai-labs perplexity-ai together-ai meta-ai-fair google-deepmind hugging-face google anthropic benchmarking math coding instruction-following model-merging model-expressiveness moe voice voice-models generative-video competition open-source model-deployment ai-agents hyung-won-chung noam-brown bindureddy akhaliq karpathy aravsrinivas fchollet cwolferesearch philschmid labenz ylecun
OpenAI's o1-preview and o1-mini models lead benchmarks in Math, Hard Prompts, and Coding. Qwen 2.5 72B model shows strong performance close to GPT-4o. DeepSeek-V2.5 tops Chinese LLMs, rivaling GPT-4-Turbo-2024-04-09. Microsoft's GRIN MoE achieves good results with 6.6B active parameters. Moshi voice model from Kyutai Labs runs locally on Apple Silicon Macs. Perplexity app introduces voice mode with push-to-talk. LlamaCoder by Together.ai uses Llama 3.1 405B for app generation. Google DeepMind's Veo is a new generative video model for YouTube Shorts. The 2024 ARC-AGI competition increases prize money and plans a university tour. A survey on model merging covers 50+ papers for LLM alignment. The Kolmogorov–Arnold Transformer (KAT) paper proposes replacing MLP layers with KAN layers for better expressiveness. Hugging Face Hub integrates with Google Cloud Vertex AI Model Garden for easier open-source model deployment. Agent.ai is introduced as a professional network for AI agents. "Touching grass is all you need."
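For context on the KAT proposal: KAN layers are motivated by the Kolmogorov-Arnold representation theorem, which says any continuous function of $n$ variables can be written using only univariate continuous functions and addition:

$$f(x_1,\dots,x_n) \;=\; \sum_{q=0}^{2n} \Phi_q\!\left(\sum_{p=1}^{n} \varphi_{q,p}(x_p)\right)$$

A KAN layer accordingly puts learnable univariate functions on edges in place of an MLP's fixed activations and linear weights, which is the swap the KAT paper applies inside a transformer block.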
$1150m for SSI, Sakana, You.com + Claude 500k context
olmo llama2-13b-chat claude claude-3.5-sonnet safe-superintelligence sakana-ai you-com perplexity-ai anthropic ai2 mixture-of-experts model-architecture model-training gpu-costs retrieval-augmented-generation video-generation ai-alignment enterprise-ai agentic-ai command-and-control ilya-sutskever mervenoyann yuchenj_uw rohanpaul_ai ctojunior omarsar0
Safe Superintelligence raised $1 billion at a $5 billion valuation, focusing on safety and search approaches as hinted by Ilya Sutskever. Sakana AI secured a $100 million Series A funding round, emphasizing nature-inspired collective intelligence. You.com pivoted to a ChatGPT-like productivity agent after a $50 million Series B round, while Perplexity AI raised over $250 million this summer. Anthropic launched Claude for Enterprise with a 500K token context window. AI2 released OLMoE, a 64-expert Mixture-of-Experts (MoE) model outperforming Llama2-13B-Chat. Key AI research trends include efficient MoE architectures, challenges in AI alignment and GPU costs, and emerging AI agents for autonomous tasks. Innovations in AI development feature command and control for video generation, Retrieval-Augmented Generation (RAG) efficiency, and GitHub integration under Anthropic's Enterprise plan. "Our logo is meant to invoke the idea of a school of fish coming together and forming a coherent entity from simple rules as we want to make use of ideas from nature such as evolution and collective intelligence in our research."
Gemini Live
gemini-1.5-pro genie falcon-mamba gemini-1.5 llamaindex google anthropic tii supabase perplexity-ai openai hugging-face multimodality benchmarking long-context retrieval-augmented-generation open-source model-releases model-integration model-performance software-engineering linear-algebra hugging-face-hub debugging omarsar0 osanseviero dbrxmosaicai alphasignalai perplexity_ai _jasonwei svpino
Google launched Gemini Live on Android for Gemini Advanced subscribers during the Pixel 9 event, featuring integrations with Google Workspace apps and other Google services. The rollout began on 8/12/2024, with iOS support planned. Cosine released Genie, an AI software engineering system achieving a 57% improvement over the prior state of the art on SWE-Bench. TII introduced Falcon Mamba, a 7B attention-free open-access model scalable to long sequences. Benchmarking showed that longer context lengths do not always improve Retrieval-Augmented Generation. Supabase launched an AI-powered Postgres service dubbed the "ChatGPT of databases," fully open source. Perplexity AI partnered with Polymarket to integrate real-time probability predictions into search results. A tutorial demonstrated a multimodal recipe recommender using Qdrant, LlamaIndex, and Gemini. An OpenAI engineer shared success tips emphasizing debugging and hard work. The connection between matrices and graphs in linear algebra was highlighted for insights into nonnegative matrices and strongly connected components. Keras 3.5.0 was released with Hugging Face Hub integration for model saving and loading.
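The matrices-and-graphs connection mentioned above can be made concrete: a nonnegative square matrix A is irreducible exactly when the directed graph with an edge i→j wherever A[i][j] > 0 is strongly connected, and because (I + A) has a full diagonal, repeated Boolean squaring of its pattern computes all-pairs reachability. A small illustrative sketch:

```python
def is_irreducible(A):
    """Check irreducibility of a nonnegative matrix via its digraph.

    Starts from the Boolean pattern of (I + A) and squares it until
    reachability stabilizes; the matrix is irreducible iff every
    entry of the result is True (the digraph is strongly connected).
    """
    n = len(A)
    R = [[(i == j) or (A[i][j] > 0) for j in range(n)] for i in range(n)]
    for _ in range(max(n - 1, 1)):  # squaring overshoots n-1 steps, which is fine
        R = [[any(R[i][k] and R[k][j] for k in range(n)) for j in range(n)]
             for i in range(n)]
    return all(all(row) for row in R)

# A 3-cycle is strongly connected; a one-way chain is not
cycle = [[0, 1, 0], [0, 0, 1], [1, 0, 0]]
chain = [[0, 1, 0], [0, 0, 1], [0, 0, 0]]
print(is_irreducible(cycle), is_irreducible(chain))  # True False
```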
Gemma 2 2B + Scope + Shield
gemma-2b gemma-2-9b gemma-2-27b llama-3-1-405b sam-2 gpt-3.5 vicuna alpacaeval g-eval google-deepmind anthropic meta-ai-fair openai perplexity-ai nvidia lmsys knowledge-distillation leaderboards model-interpretability finetuning harm-detection video-segmentation voice publishers-program robotics-data-scaling quantization llm-evaluation prompt-engineering
Gemma 2 2B, a 2-billion-parameter model trained on 2 trillion tokens and distilled from a larger unnamed LLM, has been released by Google DeepMind and shows strong leaderboard performance despite weaknesses in math. The Gemma series, including the 9B and 27B models, has gained popularity since its June release. The team also released 400 sparse autoencoders (SAEs) for interpretability, inspired by Anthropic's research. A finetuned classifier called ShieldGemma outperforms Meta's LlamaGuard in harm detection. Meanwhile, Meta AI announced Llama-3.1-405B reaching #3 on the Overall Arena leaderboard, and released SAM 2, a video and image segmentation model with significant speed improvements. OpenAI is rolling out an advanced Voice Mode to Plus users. Perplexity AI launched a Publishers Program with major media partners and a status page. NVIDIA introduced Project GR00T for scaling robot data using Apple Vision Pro and generative simulation. Interest in quantization for compressing LLMs is growing, and LLM-as-a-Judge implementations from Vicuna, AlpacaEval, and G-Eval highlight the effectiveness of simple prompts and domain-specific evaluation.
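The "simple prompts work" finding for LLM-as-a-Judge can be sketched in a few lines. The template wording and the `parse_verdict` helper below are illustrative assumptions, not the exact prompts from Vicuna, AlpacaEval, or G-Eval, and the call to the judge model itself is left out:

```python
# Minimal pairwise LLM-as-a-judge harness. Plug the built prompt into any
# chat model, then feed the raw reply to parse_verdict.
JUDGE_TEMPLATE = (
    "You are an impartial judge. Compare the two answers to the question "
    "below on helpfulness, accuracy, and clarity.\n"
    "Question: {question}\n"
    "Answer A: {answer_a}\n"
    "Answer B: {answer_b}\n"
    "Reply with exactly one token: A, B, or TIE."
)

def build_judge_prompt(question: str, answer_a: str, answer_b: str) -> str:
    return JUDGE_TEMPLATE.format(
        question=question, answer_a=answer_a, answer_b=answer_b
    )

def parse_verdict(raw: str) -> str:
    """Map a judge reply onto a closed label set; anything else is INVALID."""
    verdict = raw.strip().upper()
    return verdict if verdict in {"A", "B", "TIE"} else "INVALID"
```

In practice, setups like these also swap answer order between calls to cancel out position bias.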
Not much happened today.
phi-3-mini gpt4all-3.0 yi-large meta-3d-gen meta perplexity-ai microsoft gpt4all langchainai qdrant-engine 3d-generation long-context instruction-following reinforcement-learning-from-human-feedback persona-driven-data-synthesis meta-tuning model-steering memory-retrieval multivector-search universal-query-api rohanpaul_ai andriy_mulyar cwolferesearch sarahookr
Meta introduced Meta 3D Gen, a system for end-to-end generation of 3D assets from text in under 1 minute, producing high-quality 3D assets with detailed textures. Perplexity AI updated Pro Search to handle deeper research with multi-step reasoning and code execution. Microsoft improved Phi-3 Mini with better long-context understanding and instruction following. GPT4All 3.0 launched with support for thousands of models and major OS compatibility, featuring local file chat. Yi-Large model launched on Fireworks AI Playground. Research highlights include the evolution of reinforcement learning from human feedback (RLHF), persona-driven data synthesis using a billion diverse personas, meta-tuning for few-shot generalization, and steering vectors for model behavior control. Tool updates include LangSmith improving memory retrieval and Qdrant Engine v1.10 adding a universal query API and multivector search.
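The steering-vector line of work mentioned above typically builds the vector contrastively: average a layer's activations over prompts that exhibit the target behavior, subtract the average over prompts that do not, and add the difference back to hidden states at inference. A minimal sketch, with hypothetical array shapes (one row per prompt):

```python
import numpy as np

def contrastive_steering_vector(pos_acts, neg_acts):
    """Difference of mean activations between behavior-positive and
    behavior-negative prompts at a chosen layer."""
    return np.mean(pos_acts, axis=0) - np.mean(neg_acts, axis=0)

def steer(hidden, vector, alpha=1.0):
    """Nudge a hidden state toward (alpha > 0) or away from (alpha < 0)
    the behavior the vector encodes."""
    return hidden + alpha * vector
```

Real implementations hook this addition into one or more transformer layers during the forward pass; the sketch only shows the arithmetic.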
Contextual Position Encoding (CoPE)
cope gemini-1.5-flash gemini-1.5-pro claude gpt-3 meta-ai-fair google-deepmind anthropic perplexity-ai langchain openai positional-encoding transformers counting copying language-modeling coding external-memory tool-use model-evaluation inference-speed model-benchmarking scaling research-synthesis jason-weston alexandr-wang karpathy arav-srinivas
Meta AI researcher Jason Weston introduced CoPE, a novel positional encoding method for transformers that measures position through context-dependent learnable gates, enabling improved handling of counting and copying tasks and better performance on language modeling and coding. The approach can potentially be extended with external memory for gate calculation. Google DeepMind released Gemini 1.5 Flash and Pro models optimized for fast inference. Anthropic announced general availability of tool use for Claude, enhancing its ability to orchestrate tools for complex tasks. Alexandr Wang launched SEAL Leaderboards for private, expert evaluations of frontier models. Karpathy reflected on the 4th anniversary of GPT-3, emphasizing scaling and practical improvements. Perplexity AI launched Perplexity Pages to convert research into visually appealing articles, described as an "AI Wikipedia" by Arav Srinivas.
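The gate mechanism behind CoPE can be sketched briefly: each preceding key gets a sigmoid gate against the current query, and a key's position is the sum of the gates between it and the current token, yielding fractional, context-dependent positions. A single-head toy version (the paper then interpolates position embeddings at these fractional values, which is omitted here):

```python
import numpy as np

def cope_positions(q, K):
    """Contextual positions for one query over its preceding keys.
    gates[j] = sigmoid(q . K[j]); position of key j = sum of gates from
    j through the most recent key, so 'distance' is measured in
    context-selected units rather than raw token counts."""
    gates = 1.0 / (1.0 + np.exp(-(K @ q)))   # each gate in (0, 1)
    return np.cumsum(gates[::-1])[::-1]      # suffix sums, most recent last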
ALL of AI Engineering in One Place
claude-3-sonnet claude-3 openai google-deepmind anthropic mistral-ai cohere hugging-face adept midjourney character-ai microsoft amazon nvidia salesforce mastercard palo-alto-networks axa novartis discord twilio tinder khan-academy sourcegraph mongodb neo4j hasura modular cognition anysphere perplexity-ai groq mozilla nous-research galileo unsloth langchain llamaindex instructor weights-biases lambda-labs neptune datastax crusoe covalent qdrant baseten e2b octo-ai gradient-ai lancedb log10 deepgram outlines crew-ai factory-ai interpretability feature-steering safety multilinguality multimodality rag evals-ops open-models code-generation gpus agents ai-leadership
The upcoming AI Engineer World's Fair in San Francisco from June 25-27 will feature a significantly expanded format with booths, talks, and workshops from top model labs like OpenAI, DeepMind, Anthropic, Mistral, Cohere, HuggingFace, and Character.ai. It includes participation from Microsoft Azure, Amazon AWS, Google Vertex, and major companies such as Nvidia, Salesforce, Mastercard, Palo Alto Networks, and more. The event covers 9 tracks including RAG, multimodality, evals/ops, open models, code generation, GPUs, agents, AI in Fortune 500, and a new AI leadership track. Additionally, Anthropic shared interpretability research on Claude 3 Sonnet, revealing millions of interpretable features that can be steered to modify model behavior, including safety-relevant features related to bias and unsafe content, though more research is needed for practical applications. The event offers a discount code for AI News readers.
Not much happened today
command-r-35b goliath-120 miqu-120 llama-3-8b tensorrt-llm llama-cpp gpt2-chat gpt-4-turbo llama-3 deepmind-alphazero anthropic openai perplexity-ai amazon apple microsoft deepmind creative-writing context-windows benchmarking model-performance self-learning function-calling retrieval-augmented-generation ai-assistants on-device-ai ai-lobbying copyright-infringement code-reasoning image-generation
Anthropic released a team plan and iOS app about 4 months after OpenAI. The Command-R 35B model excels at creative writing, outperforming larger models like Goliath-120 and Miqu-120. The Llama-3 8B model now supports a 1 million token context window, improving long-context understanding with minimal training on a single 8xA800 GPU machine. TensorRT-LLM benchmarks show it is 30-70% faster than llama.cpp on consumer hardware. A benchmark suggests GPT2-Chat may have better reasoning than GPT-4-Turbo, though results are debated. Demos include a self-learning Llama-3 voice agent running locally on Jetson Orin and a Self-Learning Large Action Model (LAM). Amazon CodeWhisperer was renamed to Q Developer, expanding its generative AI assistant capabilities. Apple plans an AI-enabled Safari browser with an on-device LLM in iOS 18 and macOS 15. Big Tech dominates AI lobbying in Washington, while major U.S. newspapers sued OpenAI and Microsoft for copyright infringement. DeepMind's AlphaZero became the greatest chess player in 9 hours, and their Naturalized Execution Tuning (NExT) method improves LLM code reasoning by 14-26%. Stable Diffusion is used for diverse image generation applications.
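Context extensions like the 1M-token Llama-3 8B above typically adjust rotary position embeddings (RoPE) so far-apart positions stay distinguishable; one common lever is raising the RoPE base frequency. A sketch of the frequency schedule (the 500k default matches stock Llama 3, but the exact recipe behind this particular 1M-token extension is not specified in the source):

```python
import numpy as np

def rope_inv_freq(head_dim: int, base: float = 500_000.0):
    """Inverse frequencies for rotary position embeddings: dimension pair
    i rotates at base**(-2i/head_dim) radians per position. A larger base
    slows the rotation, stretching how far apart positions remain
    distinct before the rotations wrap around."""
    return base ** (-np.arange(0, head_dim, 2) / head_dim)
```

Comparing a small and a large base shows every non-constant frequency shrinking, i.e. slower rotation and longer usable range.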
Perplexity, the newest AI unicorn
llama-3-8b llama-3-70b llama-3 llava-llama-3-8b-v1_1 phi-3 gpt-3.5 perplexity-ai meta-ai-fair hugging-face groq context-length fine-tuning quantization instruction-following model-comparison multimodality benchmarking memory-optimization model-performance daniel-gross aravind-srinivas
Perplexity doubles its valuation shortly after its Series B with a Series B-1 funding round. Significant developments around Llama 3 include context length extension to 16K tokens, new multimodal LLaVA models outperforming Llama 2, and fine-tuning improvements like QDoRA surpassing QLoRA. The Llama-3-70B model is praised for instruction following and performance across quantization formats. Phi-3 models by Microsoft released in multiple sizes show competitive benchmark results, with the 14B model achieving 78% on MMLU and the 3.8B model nearing GPT-3.5 performance.
Grok-1 in Bio
grok-1 mixtral miqu-70b claude-3-opus claude-3 claude-3-haiku xai mistral-ai perplexity-ai groq anthropic openai mixture-of-experts model-release model-performance benchmarking finetuning compute hardware-optimization mmlu model-architecture open-source memes sam-altman arthur-mensch daniel-han arav-srinivas francis-yao
Grok-1, a 314B parameter Mixture-of-Experts (MoE) model from xAI, has been released under an Apache 2.0 license, sparking discussions on its architecture, finetuning challenges, and performance compared to models like Mixtral and Miqu 70B. Despite its size, its MMLU benchmark performance is currently unimpressive, with expectations that Grok-2 will be more competitive. The model's weights and code are publicly available, encouraging community experimentation. Sam Altman highlighted the growing importance of compute resources, while Grok's potential deployment on Groq hardware was noted as a possible game-changer. Meanwhile, Anthropic's Claude continues to attract attention for its "spiritual" interaction experience and consistent ethical framework. The release also inspired memes and humor within the AI community.
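Grok-1's MoE layers activate 2 of 8 experts per token, so the routing step reduces to a top-2 softmax over gate logits. A minimal illustrative router (names and shapes are assumptions, not xAI's code):

```python
import numpy as np

def top2_route(hidden, gate_weights):
    """Pick the two highest-scoring experts for one token and return
    their indices with renormalized softmax weights. Production MoE
    layers add load-balancing losses and batched expert dispatch on
    top of this core step."""
    logits = hidden @ gate_weights            # (num_experts,)
    top = np.argsort(logits)[-2:][::-1]       # best two experts, best first
    w = np.exp(logits[top] - logits[top].max())
    return top, w / w.sum()
```

Because only the selected experts run, a 314B-parameter model touches a much smaller fraction of its weights per token, which is why MoE inference cost tracks active rather than total parameters.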
Inflection-2.5 at 94% of GPT4, and Pi at 6m MAU
inflection-2.5 claude-3-sonnet claude-3-opus gpt-4 yi-9b mistral inflection anthropic perplexity-ai llamaindex mistral-ai langchain retrieval-augmented-generation benchmarking ocr structured-output video-retrieval knowledge-augmentation planning tool-use evaluation code-benchmarks math-benchmarks mustafa-suleyman amanda-askell jeremyphoward abacaj omarsar0
Mustafa Suleyman announced Inflection 2.5, which achieves more than 94% of GPT-4's average performance despite using only 40% of the training FLOPs. Pi's user base is growing about 10% weekly, with new features like realtime web search. The community noted similarities between Inflection 2.5 and Claude 3 Sonnet. Claude 3 Opus outperformed GPT-4 in a 1.5:1 vote and is now the default for Perplexity Pro users. Anthropic added experimental tool calling support for Claude 3 via LangChain. LlamaIndex released LlamaParse JSON Mode for structured PDF parsing and added video retrieval via VideoDB, enabling retrieval-augmented generation (RAG) pipelines. A paper proposed knowledge-augmented planning for LLM agents. New benchmarks like TinyBenchmarks and the Yi-9B model release show strong code and math performance, surpassing Mistral.
Stable Diffusion 3 — Rombach & Esser did it again!
stable-diffusion-3 claude-3 orca dolphincoder-starcoder2-15b stability-ai anthropic microsoft latitude perplexity-ai llamaindex tripo-ai diffusion-models multimodality benchmarking human-evaluation text-generation image-generation 3d-modeling fine-tuning roleplay coding dataset-release soumith-chintala bill-peebles swyx kevinafischer jeremyphoward akhaliq karinanguyen_ aravsrinivas
Over 2500 new community members joined following Soumith Chintala's shoutout, highlighting growing interest in SOTA LLM-based summarization. The major highlight is the detailed paper release of Stable Diffusion 3 (SD3), showcasing advanced text-in-image control and complex prompt handling, with the model outperforming other SOTA image generation models in human-evaluated benchmarks. The SD3 model is based on an enhanced Diffusion Transformer architecture called MMDiT. Meanwhile, Anthropic released Claude 3 models, noted for human-like responses and emotional depth, scoring 79.88% on HumanEval but costing over twice as much as GPT-4. Microsoft launched new Orca-based models and datasets, and Latitude released DolphinCoder-StarCoder2-15b with strong coding capabilities. Integration of image models by Perplexity AI and 3D CAD generation by PolySpectra powered by LlamaIndex were also highlighted. "SD3's win rate beats all other SOTA image gen models (except perhaps Ideogram)" and "Claude 3 models are very good at generating d3 visualizations from text descriptions."
Welcome Interconnects and OpenRouter
mistral-large miqu mixtral gpt-4 mistral-7b mistral-ai openai perplexity-ai llamaindex qwen langchain model-comparison model-optimization quantization role-playing story-writing code-clarity ai-assisted-decompilation asynchronous-processing quantum-computing encoder-based-diffusion open-source hardware-experimentation rag-systems nathan-lambert alex-atallah
An analysis of 22 Discord guilds, 349 channels, and 12,885 messages revealed active discussions on model comparisons and optimizations involving Mistral AI, Miqu, and GGUF-quantized models. Highlights include comparing Mistral Large with GPT-4, focusing on cost-effectiveness and performance, and exploring quantization techniques like GPTQ and QLoRA to reduce VRAM usage. Advanced applications such as role-playing, story-writing, code clarity, and AI-assisted decompilation were emphasized, alongside development of tools like an asynchronous summarization script for Mistral 7b. The intersection of quantum computing and AI was discussed, including DARPA-funded projects and encoder-based diffusion techniques for image processing. Community efforts featured new Spanish LLM announcements, hardware experimentation, and open-source initiatives, with platforms like Perplexity AI and LlamaIndex noted for innovation and integration. Speculation about Mistral AI's open-source commitment and tools like R2R for rapid RAG deployment highlighted collaborative spirit.
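The VRAM motivation behind GPTQ/QLoRA-style quantization is simple arithmetic: weight memory scales with parameter count times bits per weight. A back-of-envelope helper (the 10% overhead factor for quantization scales and runtime buffers is an assumption, not a measured constant):

```python
def weight_memory_gib(params_billion: float, bits: int,
                      overhead: float = 1.1) -> float:
    """Approximate GiB needed to hold a model's weights at a given
    bit-width, padded by a rough overhead factor for scales/buffers."""
    return params_billion * 1e9 * bits / 8 / 2**30 * overhead
```

A 7B model drops from roughly 14 GiB at 16-bit to under 4 GiB at 4-bit, which is why quantized Mistral 7B variants fit comfortably on consumer GPUs.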
1/2/2024: Smol tweaks to Smol Talk
claude-2 bard copilot meta-ai gemini-ultra chatgpt openai meta-ai-fair perplexity-ai prompt-engineering api json yaml markdown chatbot image-generation vpn browser-compatibility personality-tuning plugin-issues
OpenAI Discord discussions highlight a detailed comparison of AI search engines including Perplexity, Copilot, Bard, and Claude 2, with Bard and Claude 2 trailing behind. Meta AI chatbot by Meta is introduced, available on Instagram and WhatsApp, featuring image generation likened to a free GPT version. Users report multiple browser issues with ChatGPT, including persistent captchas when using VPNs and plugin malfunctions. Debates cover prompt engineering, API usage, and data formats like JSON, YAML, and Markdown. Discussions also touch on ChatGPT's personality tuning and model capability variations. "Meta AI includes an image generation feature, which he likened to a free version of GPT."
1/1/2024: How to start with Open Source AI
gpt-4-turbo dall-e-3 chatgpt openai microsoft perplexity-ai prompt-engineering ai-reasoning custom-gpt performance python knowledge-integration swyx
OpenAI Discord discussions revealed mixed sentiments about Bing's AI versus ChatGPT and Perplexity AI, and debated Microsoft Copilot's integration with Office 365. Users discussed DALL-E 3 access within ChatGPT Plus, ChatGPT's performance issues, and ways to train a GPT model using book content via OpenAI API or custom GPTs. Anticipation for GPT-4 turbo in Microsoft Copilot was noted alongside conversations on AI reasoning, prompt engineering, and overcoming Custom GPT glitches. Advice for AI beginners included starting with Python and using YAML or Markdown for knowledge integration. The future of AI with multiple specialized GPTs and Microsoft Copilot's role was also explored.
12/21/2023: The State of AI (according to LangChain)
mixtral gpt-4 chatgpt bard dall-e langchain openai perplexity-ai microsoft poe model-consistency model-behavior response-quality chatgpt-usage-limitations error-handling user-experience model-comparison hallucination-detection prompt-engineering creative-ai
LangChain launched their first report based on LangSmith stats revealing top charts for mindshare. On OpenAI's Discord, users raised issues about the Mixtral model, noting inconsistencies and comparing it to Poe's Mixtral. There were reports of declining output quality and unpredictable behavior in GPT-4 and ChatGPT, with discussions on differences between Playground GPT-4 and ChatGPT GPT-4. Users also reported anomalous behavior in Bing and Bard AI models, including hallucinations and strange assertions. Various user concerns included message limits on GPT-4, response completion errors, chat lags, voice setting inaccessibility, password reset failures, 2FA issues, and subscription restrictions. Techniques for guiding GPT-4 outputs and creative uses with DALL-E were also discussed. Users highlighted financial constraints affecting subscriptions and queries about earning with ChatGPT and token costs.
12/14/2023: $1e7 for Superalignment
gemini bard gpt-4 gpt-4.5 llama-2 openai llamaindex perplexity-ai prompt-engineering api custom-gpt json bug-fixes chatbots performance tts code-generation image-recognition jan-leike patrick-collison
Jan Leike is launching a new grant initiative inspired by Patrick Collison's Fast Grants to support AI research. OpenAI introduced a new developers Twitter handle @OpenAIDevs for community updates. Discussions on Google's Gemini and Bard chatbots highlight their ability to read each other's instructions and offer unique coding solutions. Users reported various issues with GPT-4, including performance problems, customization difficulties, and a resolved bug in image recognition. There are ongoing conversations about prompt engineering challenges and new JSON mode support in Convo-lang for API use. Concerns about misuse of chatbots for illegal activities and alternatives like Llama 2 models and the Perplexity chatbot were also discussed.