All tags
Company: "llamaindex"
Apple exposes Foundation Models API and... no new Siri
chatgpt apple openai langchain llamaindex on-device-ai foundation-models reasoning reinforcement-learning voice translation software-automation agentic-workflows gdb scaling01 giffmana kevinweil
Apple released on-device foundation models for iOS developers, though its recent "The Illusion of Thinking" paper drew significant backlash for flawed methodology in evaluating LLM reasoning. OpenAI updated ChatGPT's Advanced Voice Mode with a more natural voice and improved translation, demonstrated by Greg Brockman. LangChain and LlamaIndex launched new AI agents and tools, including a SWE Agent for software automation and an Excel agent that uses reinforcement learning for data transformation. The AI community engaged in heated debate over the reasoning capabilities of LLMs, highlighting challenges in evaluation methods.
ChatGPT responds to GlazeGate + LMArena responds to Cohere
qwen3-235b-a22b qwen3 qwen3-moe llama-4 openai cohere lm-arena deepmind x-ai meta-ai-fair alibaba vllm llamaindex model-releases model-benchmarking performance-evaluation open-source multilinguality model-integration fine-tuning model-optimization joannejang arankomatsuzaki karpathy sarahookr reach_vb
OpenAI faced backlash after a controversial ChatGPT update, leading to an official retraction admitting they "focused too much on short-term feedback." Researchers from Cohere published a paper criticizing LMArena for unfair practices favoring incumbents like OpenAI, DeepMind, xAI, and Meta AI FAIR. The Qwen3 family by Alibaba was released, featuring models up to a 235B MoE, supporting 119 languages and trained on 36 trillion tokens, with integration into vLLM and support in tools like llama.cpp. Meta announced the second round of Llama Impact Grants to promote open-source AI innovation. Discussions on AI Twitter highlighted concerns about leaderboard overfitting and fairness in model benchmarking, with notable commentary from karpathy and others.
LlamaCon: Meta AI gets into the Llama API platform business
llama-4 qwen3 qwen3-235b-a22b qwen3-30b-a3b qwen3-4b qwen2-5-72b-instruct o3-mini meta-ai-fair cerebras groq alibaba vllm ollama llamaindex hugging-face llama-cpp model-release fine-tuning reinforcement-learning moe multilingual-models model-optimization model-deployment coding benchmarking apache-license reach_vb huybery teortaxestex awnihannun thezachmueller
Meta celebrated progress in the Llama ecosystem at LlamaCon, launching an AI Developer platform with finetuning and fast inference powered by Cerebras and Groq hardware, though it remains waitlisted. Meanwhile, Alibaba released the Qwen3 family of large language models, including two MoE models and six dense models ranging from 0.6B to 235B parameters, with the flagship Qwen3-235B-A22B achieving competitive benchmark results and supporting 119 languages and dialects. The Qwen3 models are optimized for coding and agentic capabilities, are Apache 2.0 licensed, and have broad deployment support including local usage with tools like vLLM, Ollama, and llama.cpp. Community feedback highlights Qwen3's scalable performance and superiority over models like OpenAI's o3-mini.
Grok 3 & 3-mini now API Available
grok-3 grok-3-mini gemini-2.5-flash o3 o4-mini llama-4-maverick gemma-3-27b openai llamaindex google-deepmind epochairesearch goodfireai mechanize agent-development agent-communication cli-tools reinforcement-learning model-evaluation quantization-aware-training model-compression training-compute hybrid-reasoning model-benchmarking
Grok 3 API is now available, including a smaller version called Grok 3 mini, which offers competitive pricing and full reasoning traces. OpenAI released a practical guide for building AI agents, while LlamaIndex added support for the Agent2Agent protocol for multi-agent communication. Codex CLI is gaining traction with new features and competition from Aider and Claude Code. Google DeepMind launched Gemini 2.5 Flash, a hybrid reasoning model topping the Chatbot Arena leaderboard. OpenAI's o3 and o4-mini models show emergent behaviors from large-scale reinforcement learning. EpochAIResearch updated its methodology, removing Llama 4 Maverick from its list of high-FLOP models after revising its training compute estimate downward. GoodfireAI announced a $50M Series A for its Ember neural programming platform. Mechanize was founded to build virtual work environments and automation benchmarks. Google DeepMind's Quantization-Aware Training for Gemma 3 models reduces model size significantly, with open-source checkpoints available.
not much happened today
aya-vision-8b aya-vision-32b llama-3-2-90b-vision molmo-72b phi-4-mini phi-4-multimodal cogview4 wan-2-1 weights-and-biases coreweave cohereforai microsoft alibaba google llamaindex weaviate multilinguality vision multimodality image-generation video-generation model-releases benchmarking funding agentic-ai model-performance mervenoyann reach_vb jayalammar sarahookr aidangomez nickfrosst dair_ai akhaliq bobvanluijt jerryjliu0
Weights and Biases announced a $1.7 billion acquisition by CoreWeave ahead of CoreWeave's IPO. CohereForAI released the Aya Vision models (8B and 32B parameters) supporting 23 languages, outperforming larger models like Llama-3.2 90B Vision and Molmo 72B. Microsoft introduced Phi-4-Mini (3.8B parameters) and Phi-4-Multimodal models, excelling in math, coding, and multimodal benchmarks. CogView4, a 6B parameter text-to-image model with 2048x2048 resolution and Apache 2.0 license, was released. Alibaba launched Wan 2.1, an open-source video generation model with 720p output and 16 fps generation. Google announced new AI features for Pixel devices including Scam Detection and Gemini integrations. LlamaCloud reached General Availability and raised $19M Series A funding, serving over 100 Fortune 500 companies. Weaviate launched the Query Agent, the first of three Weaviate Agents.
LLaDA: Large Language Diffusion Models
llada-8b llama-3-8b step-video-t2v-30b step-audio-chat-132b llama-2-7b stepfun-ai scale-ai cambridge llamaindex diffusion-models text-generation multimodality video-generation voice-processing benchmarking instruction-following model-scaling gpu-usage long-context multi-turn-dialogue arankomatsuzaki _akhaliq omarsar0 iscienceluvr gallabytes maximelabonne reach_vb
LLaDA (Large Language Diffusion Model) 8B is a breakthrough diffusion-based language model that rivals LLaMA 3 8B while training on 7x fewer tokens (2 trillion) and using 0.13 million H800 GPU hours. It introduces a novel text generation approach: tokens are uniformly masked and then predicted in a reverse diffusion process, which supports multi-turn dialogue and instruction following. Alongside it, StepFun AI released two major models: Step-Video-T2V 30B, a text-to-video model generating up to 204 frames with high coherence and motion quality, and Step-Audio-Chat 132B, a voice-to-voice model. Additionally, new multimodal benchmarks like Scale AI's EnigmaEval and Cambridge's ZeroBench see current frontier models scoring zero, underscoring their difficulty. The community also noted the return of diffusion models in language modeling, a previously speculative architecture now scaled successfully.
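That decoding scheme can be sketched in a few lines: start from a fully masked sequence and commit a growing fraction of predicted tokens at each step. This is a toy illustration under our own simplifications (a stand-in denoiser and a linear unmasking schedule), not LLaDA's actual code:

```python
import random

MASK = -1  # sentinel id for a masked position

def toy_denoiser(tokens):
    # Stand-in for the trained mask predictor: proposes a token id for
    # every masked position (a fixed id here, purely illustrative).
    return [t if t != MASK else 7 for t in tokens]

def diffusion_generate(length, steps=4, seed=0):
    """Reverse-process sketch: start fully masked, then at each step
    predict all masked tokens and commit a growing fraction of them."""
    rng = random.Random(seed)
    tokens = [MASK] * length
    for s in range(1, steps + 1):
        proposal = toy_denoiser(tokens)
        target_unmasked = round(length * s / steps)
        masked = [i for i, t in enumerate(tokens) if t == MASK]
        n_commit = target_unmasked - (length - len(masked))
        for i in rng.sample(masked, n_commit):
            tokens[i] = proposal[i]
    return tokens
```

By the final step every position has been committed, so the output is a fully unmasked sequence of the requested length.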
Mistral Small 3 24B and Tulu 3 405B
mistral-small-3 tulu-3-405b llama-3 tiny-swallow-1.5b qwen-2.5-max deepseek-v3 claude-3.5-sonnet gemini-1.5-pro gpt4o-mini llama-3-3-70b mistral-ai ai2 sakana-ai alibaba_qwen deepseek ollama llamaindex reinforcement-learning model-fine-tuning local-inference model-performance model-optimization on-device-ai instruction-following api training-data natural-language-processing clementdelangue dchaplot reach_vb
Mistral AI released Mistral Small 3, a 24B parameter model optimized for local inference with low latency and 81% accuracy on MMLU, competing with Llama 3.3 70B, Qwen-2.5 32B, and GPT4o-mini. AI2 released Tülu 3 405B, a large finetune of Llama 3 trained with Reinforcement Learning from Verifiable Rewards (RLVR), competitive with DeepSeek v3. Sakana AI launched TinySwallow-1.5B, a Japanese language model using TAID for on-device use. Alibaba_Qwen released Qwen 2.5 Max, trained on 20 trillion tokens, with performance comparable to DeepSeek V3, Claude 3.5 Sonnet, and Gemini 1.5 Pro, and updated API pricing. These releases highlight advances in open models, efficient inference, and reinforcement learning techniques.
not much happened today
helium-1 qwen-2.5 phi-4 sky-t1-32b-preview o1 codestral-25.01 phi-3 mistral llama-3 gpt-3.5 llmquoter kyutai-labs lmstudio mistralai llamaindex huggingface langchainai hyperbolic-labs replit multilinguality token-level-distillation context-windows model-performance open-source reasoning coding retrieval-augmented-generation hybrid-retrieval multiagent-systems video large-video-language-models dynamic-ui voice-interaction gpu-rentals model-optimization semantic-deduplication model-inference reach_vb awnihannun lior_on_ai sophiamyang omarsar0 skirano yuchenj_uw fchollet philschmid
Helium-1 Preview by kyutai_labs is a 2B-parameter multilingual base LLM outperforming Qwen 2.5, trained on 2.5T tokens with a 4096 context size using token-level distillation from a 7B model. Phi-4 (4-bit) was released in LM Studio, noted for its speed and performance on an M4 Max. Sky-T1-32B-Preview is a $450 open-source reasoning model matching o1's performance with strong benchmark scores. Codestral 25.01 by mistralai is a new SOTA coding model supporting 80+ programming languages and offering 2x speed.
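Token-level distillation of the kind described for Helium-1 minimizes the per-position divergence between the teacher's and the student's next-token distributions. A minimal sketch of that objective in plain Python (illustrative only, not kyutai's training code):

```python
import math

def softmax(logits):
    # Numerically stable softmax over one position's vocabulary logits.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    z = sum(exps)
    return [e / z for e in exps]

def token_distill_loss(teacher_logits, student_logits):
    """Mean per-position KL(teacher || student) over a sequence: the
    quantity a token-level distillation objective drives toward zero."""
    total = 0.0
    for t_row, s_row in zip(teacher_logits, student_logits):
        p, q = softmax(t_row), softmax(s_row)
        total += sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)
    return total / len(teacher_logits)
```

When the student matches the teacher exactly the loss is zero; any mismatch in the distributions makes it positive.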
Innovations include AutoRAG for optimizing retrieval-augmented generation pipelines, Agentic RAG for autonomous query reformulation and critique, Multiagent Finetuning using societies of models like Phi-3, Mistral, LLaMA-3, and GPT-3.5 for reasoning improvements, and VideoRAG incorporating video content into RAG with LVLMs.
Applications include a dynamic UI AI chat app by skirano on Replit, LangChain tools like DocTalk for voice PDF conversations, AI travel agent tutorials, and news summarization agents. Hyperbolic Labs offers competitive GPU rentals including H100, A100, and RTX 4090. LLMQuoter enhances RAG accuracy by identifying key quotes.
Infrastructure updates include MLX export for LLM inference from Python to C++ by fchollet and SemHash semantic text deduplication by philschmid.
Moondream 2025.1.9: Structured Text, Enhanced OCR, Gaze Detection in a 2B Model
o1 vdr-2b-multi-v1 llava-mini openai llamaindex langchainai qdrant genmoai vision model-efficiency structured-output gaze-detection reasoning model-distillation multimodality embedding-models gan diffusion-models self-attention training-optimizations development-frameworks api cross-language-deployment semantic-search agentic-document-processing developer-experience philschmid saranormous jxmnop reach_vb iscienceluvr multimodalart arohan adcock_brett awnihannun russelljkaplan ajayj_
Moondream has released a new version that advances VRAM efficiency and adds structured output and gaze detection, marking a new frontier in vision model practicality. Discussions on Twitter highlighted advancements in reasoning models like OpenAI's o1, model distillation techniques, and new multimodal embedding models such as vdr-2b-multi-v1 and LLaVA-Mini, which significantly reduce computational costs. Research on GANs and decentralized diffusion models showed improved stability and performance. Development tools like MLX and vLLM received updates for better portability and developer experience, while frameworks like LangChain and Qdrant enable intelligent data workflows. Company updates include new roles and team expansions at GenmoAI. "Efficiency tricks are all you need."
not much happened today
vllm deepseek-v3 llamaindex openai deepseek qdrant twilio elevenlabs training-efficiency parallelism cpu-offloading gradient-descent mixture-of-experts fp8-precision memory-optimization ai-voice-assistants coding-assistants document-processing version-control learning-rate-schedules federated-learning agentic-systems multi-agent-systems deliberative-alignment chain-of-thought on-device-ai multimodality francois-fleuret daniel-hanchen aaron-defazio fchollet elad-gil wojciech-zaremba richard-socher
ChatGPT, Sora, and the OpenAI API experienced a >5 hour outage but are now restored. Updates to vLLM enable DeepSeek-V3 to run with enhanced parallelism and CPU offloading, improving model deployment flexibility. Discussions on gradient descent in top-k routing MoE and adoption of FP8 precision focus on training efficiency and memory optimization. AIDE, an AI voice medical assistant by Team Therasync, leverages Qdrant, OpenAI, and Twilio. DeepSeek-Engineer offers AI-powered coding assistance with structured outputs. LlamaIndex integrates LlamaCloud and ElevenLabs for large-scale document processing and voice interaction. Insights on version control with ghstack and advocacy for linear decay learning rate schedules highlight best practices in AI development. Experts predict smaller, tighter models, true multimodal models, and on-device AI in 2025. Proposals for planetary-scale federated learning and community AGI moonshots emphasize future AI directions. Discussions on agentic systems, multi-agent workflows, and deliberative alignment through chain of thought reasoning underscore AI safety and alignment efforts.
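The linear decay schedules advocated above are simple to state: anneal the learning rate linearly from its base value down to a floor over the course of training. A minimal sketch (parameter names and defaults are our own):

```python
def linear_decay_lr(step, total_steps, base_lr=3e-4, final_lr=0.0):
    """Linear-decay schedule: anneal from base_lr at step 0 down to
    final_lr at total_steps, then hold at final_lr."""
    frac = min(step, total_steps) / total_steps
    return base_lr + (final_lr - base_lr) * frac
```

The appeal over cosine or step schedules is that the decay rate is constant and the endpoint is explicit, which makes runs of different lengths easy to compare.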
not much happened today
qwen-o1 qvq claude-3.5-sonnet gpt-4o o3 o3-mini alibaba openai mit idsia llamaindex ollama vision benchmarking llm-calibration intentionality alignment-faking deliberative-alignment artificial-life gdpr-compliance contract-review-agent app-creation synthetic-data post-transformers smol-models agents bret-taylor
The Qwen team launched QVQ, a vision-enabled version of their experimental QwQ o1 clone, benchmarking comparably to Claude 3.5 Sonnet. Discussions include Bret Taylor's insights on autonomous software development distinct from the Copilot era. The Latent Space LIVE! talks cover highlights of 2024 AI startups, vision, open models, post-transformers, synthetic data, smol models, and agents. Twitter recaps by Claude 3.5 Sonnet highlight proposals for benchmarks measuring LLM calibration and falsehood confidence, with QVQ outperforming GPT-4o and Claude Sonnet 3.5. AI alignment debates focus on intentionality and critiques of alignment faking in models like Claude. Updates from OpenAI include new o3 and o3-mini models and a deliberative alignment strategy. The ASAL project is a collaboration between MIT, OpenAI, and Swiss AI Lab IDSIA to automate artificial life discovery. Personal stories reveal frustrations with USCIS green card denials despite high qualifications. New tools like GeminiCoder enable rapid app creation, and a contract review agent using Reflex and Llama Index checks GDPR compliance. Holiday greetings and memes were also shared.
Meta BLT: Tokenizer-free, Byte-level LLM
byte-latent-transformer llama-3 phi-4 gpt-4o command-r7b meta-ai-fair llamaindex microsoft deepseek-ai openai cohere anthropic tokenization transformer-architecture model-efficiency benchmarking multimodality vision reinforcement-learning model-scaling jailbreaking model-optimization
Meta AI introduces the Byte Latent Transformer (BLT), a tokenizer-free architecture that dynamically forms byte patches for efficient compute allocation, outperforming Llama 3 on benchmarks including the CUTE benchmark. The model was trained on approximately 1 trillion tokens and features a three-block transformer design with local and global components. This approach challenges traditional tokenization and may enable new multimodal capabilities such as direct file interaction without retrieval-augmented generation. Additionally, Microsoft announced the Phi-4 14B parameter model achieving state-of-the-art results on STEM and reasoning benchmarks, surpassing GPT-4o. DeepSeek AI launched new vision-language models based on their MoE architecture with sizes ranging from 1.0B to 27B parameters. OpenAI released a new Projects feature for ChatGPT, and Cohere introduced their smallest and fastest Command R7B model. Anthropic published research on "Best-of-N Jailbreaking" vulnerabilities across text, vision, and audio models. Industry discussion highlights a trend of decreasing frontier LLM sizes, with GPT-4 at approximately 1.8 trillion parameters compared to newer models.
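BLT's dynamic patching places patch boundaries where the next byte is hard to predict, so compute concentrates on unpredictable regions. A toy sketch of entropy-based patching, using a bigram count model as a stand-in for BLT's small byte-level LM and an arbitrary threshold:

```python
import math
from collections import defaultdict

def next_byte_entropies(data):
    """Next-byte entropy under a bigram count model fit on the data
    itself -- a toy stand-in for a trained byte-level LM."""
    counts = defaultdict(lambda: defaultdict(int))
    for a, b in zip(data, data[1:]):
        counts[a][b] += 1
    ents = [8.0]  # first byte: assume maximal uncertainty
    for a in data[:-1]:
        total = sum(counts[a].values())
        probs = [c / total for c in counts[a].values()]
        ents.append(-sum(p * math.log2(p) for p in probs))
    return ents

def entropy_patches(data, threshold=0.5):
    """Open a new patch wherever predicted next-byte entropy exceeds
    the threshold, giving unpredictable regions finer-grained patches."""
    ents = next_byte_entropies(data)
    patches, current = [], bytearray()
    for byte, ent in zip(data, ents):
        if current and ent > threshold:
            patches.append(bytes(current))
            current = bytearray()
        current.append(byte)
    patches.append(bytes(current))
    return patches
```

Concatenating the patches always reconstructs the original byte stream; only the grouping, and hence the compute allocation, changes.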
$200 ChatGPT Pro and o1-full/pro, with vision, without API, and mixed reviews
o1 o1-pro claude-3.5-sonnet pali-gemma-2 openai google llamaindex multimodality vision fine-tuning benchmarking model-performance image-generation document-processing model-release sama bindureddy mervenoyann fchollet
OpenAI launched the o1 model with multimodal capabilities, faster reasoning, and image input support, marking it as a state-of-the-art model despite some bugs and mixed community reviews. The new o1-pro tier offers unlimited access for $200/month with notable benchmark improvements but some performance trade-offs compared to Claude 3.5 Sonnet. Google released the PaliGemma 2 vision-language model family in sizes 3B, 10B, and 28B, excelling in visual question answering, image segmentation, and OCR, with day-0 support for fine-tuning. LlamaIndex announced discounts and feature updates for large-scale document processing. The AI community also reacted humorously to the new pricing tiers and model comparisons. "o1 can see now, which makes it the SOTA multimodal model" and "most users will be best served by free/Plus tiers" were notable sentiments.
not much happened today
claude-3.5-sonnet opencoder anthropic microsoft sambanova openai langchain llamaindex multi-agent-systems natural-language-interfaces batch-processing harmful-content-detection secret-management retrieval-augmented-generation error-analysis memory-management web-scraping autonomous-agents sophiamyang tom_doerr omarsar0 _akhaliq andrewyng giffmana
This week in AI news, Anthropic launched Claude 3.5 Sonnet with desktop app control via natural language. Microsoft introduced Magentic-One, a multi-agent system built on the AutoGen framework. OpenCoder was unveiled as an AI-powered code cookbook for large language models. SambaNova is sponsoring a hackathon with prizes up to $5000 for building real-time AI agents. Mistral's sophiamyang announced new Batch and Moderation APIs with 50% lower cost and multi-dimensional harmful text detection. Open-source tools like Infisical for secret management, CrewAI for autonomous agent orchestration, and Crawlee for web scraping were released. Research highlights include SCIPE for error analysis in LLM chains, Context Refinement Agent for improved retrieval-augmented generation, and MemGPT for managing LLM memory. The week also saw a legal win for OpenAI in the RawStory copyright case, affirming that facts used in LLM training are not copyrightable.
not much happened today
llama-3-2-vision gpt-2 meta-ai-fair ollama amd llamaindex gemini gitpod togethercompute langchainai weights-biases stanfordnlp deeplearningai model-scaling neural-networks multi-gpu-support skip-connections transformers healthcare-ai automated-recruitment zero-trust-security small-language-models numerical-processing chain-of-thought optical-character-recognition multi-agent-systems agent-memory interactive-language-learning bindureddy fstichler stasbekman jxmnop omarsar0 giffmana rajammanabrolu
This week in AI news highlights Ollama 0.4 supporting Meta's Llama 3.2 Vision models (11B and 90B), with applications like handwriting recognition. Self-Consistency Preference Optimization (ScPO) was introduced to improve model consistency without human labels. Discussions covered model scaling, the resurgence of neural networks, and AMD's multi-GPU bandwidth challenges. The importance of skip connections in Transformers was emphasized. In healthcare, commentators argued that less regulation plus AI could revolutionize disease treatment and aging research. Tools like LlamaParse and Gemini aid automated resume insights. Gitpod Flex demonstrated zero-trust architecture for secure development environments. Research includes surveys on Small Language Models (SLMs), number understanding in LLMs, and DTrOCR using a GPT-2 decoder for OCR. Multi-agent systems in prediction markets were discussed by TogetherCompute and LangChainAI. Community events include NeurIPS Happy Hour, NLP seminars, and courses on Agent Memory with LLMs as operating systems.
Tencent's Hunyuan-Large claims to beat DeepSeek-V2 and Llama3-405B with LESS Data
claude-3.5-haiku llama-3-1 llama-3-2 mlx-lm tencent anthropic meta-ai-fair togethercompute llamaindex mixture-of-experts synthetic-data model-scaling model-architecture model-optimization kv-cache-quantization react fine-tuning scaling-laws model-efficiency model-deployment multimodality
Tencent released a notable >300B parameter MoE model pretrained on 7T tokens, including 1.5T synthetic data generated via Evol-Instruct. The model introduces novel techniques like "recycle routing" and expert-specific learning rates, alongside a compute-efficient scaling law for MoE active parameters. However, its custom license restricts use in the EU and by companies with over 100M MAU, and it avoids China-sensitive queries. Meanwhile, Anthropic launched Claude 3.5 Haiku, now available on multiple platforms, praised for intelligence and speed but criticized for a 10x price increase. Meta opened Llama AI to the U.S. defense sector, and a Llama Impact Hackathon offers a $15K prize for projects using Llama 3.1 & 3.2 Vision. LlamaIndex released a React chat UI component with Tailwind CSS and LLM backend integrations. The MLX LM package added KV cache quantization, improving text generation speed and memory efficiency.
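KV cache quantization of the kind MLX LM ships stores cached keys and values as low-bit integers plus scale/offset metadata, shrinking cache memory at a small accuracy cost. A minimal affine-quantization sketch (per-vector rather than grouped, for brevity; not MLX's implementation):

```python
def quantize(vals, bits=8):
    """Affine quantization: map floats onto ints in [0, 2^bits - 1],
    keeping a (scale, offset) pair to invert the mapping."""
    lo, hi = min(vals), max(vals)
    scale = (hi - lo) / (2 ** bits - 1) or 1.0  # guard constant vectors
    q = [round((v - lo) / scale) for v in vals]
    return q, scale, lo

def dequantize(q, scale, lo):
    # Reconstruct approximate floats; error is bounded by scale / 2.
    return [x * scale + lo for x in q]
```

At 8 bits each cached value takes a quarter of the space of float32, which is why quantizing the KV cache pays off most at long context lengths.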
not much happened today
llama-3.1-nemotron-70b golden-gate-claude embed-3 liquid-ai anthropic cohere openai meta-ai-fair nvidia perplexity-ai langchain kestra ostrisai llamaindex feature-steering social-bias multimodality model-optimization workflow-orchestration inference-speed event-driven-workflows knowledge-backed-agents economic-impact ai-national-security trust-dynamics sam-altman lmarena_ai aravsrinivas svpino richardmcngo ajeya_cotra tamaybes danhendrycks jerryjliu0
Liquid AI held a launch event introducing new foundation models. Anthropic shared follow-up research on social bias and feature steering building on its "Golden Gate Claude" work. Cohere released multimodal Embed 3 embedding models following Aya Expanse. Misinformation about GPT-5/Orion was debunked by Sam Altman. Meta AI FAIR announced Open Materials 2024 with new models and datasets for inorganic materials discovery using the EquiformerV2 architecture. Anthropic demonstrated feature steering to balance social bias and model capabilities. NVIDIA's Llama-3.1-Nemotron-70B ranked highly on the Arena leaderboard with style control. Perplexity AI expanded to 100M weekly queries with new finance and reasoning modes. LangChain emphasized real application integration with interactive frame interpolation. Kestra highlighted scalable event-driven workflows with open-source YAML-based orchestration. OpenFLUX doubled inference speed through guidance LoRA training. Discussions on AI safety included trust dynamics between humans and AI, economic impacts of AI automation, and the White House AI National Security memo addressing cyber and biological risks. LlamaIndex showcased knowledge-backed agents for enhanced AI applications.
Not much technical happened today
whisper-v3-turbo llama-3 llamaindex openai poolside liquidai perplexity-ai meta-ai-fair cohere fujitsu mixture-of-experts context-windows model-optimization fine-tuning quantization model-training alignment synthetic-data model-architecture agentic-ai nick-turley arav-srinivas francois-fleuret finbarr-timbers lewtun francois-chollet jerry-j-liu mmitchell-ai jxnlco
OpenAI announced raising $6.6B in new funding at a $157B valuation, with ChatGPT reaching 250M weekly active users. Poolside raised $500M to advance AGI development. LiquidAI introduced three new models (1B, 3B, and a 40B MoE) with a 32k context window and efficient token handling. OpenAI released Whisper V3 Turbo, an open-source multilingual model with significant speed improvements. Meta AI FAIR is hiring research interns focusing on LLM reasoning, alignment, synthetic data, and novel architectures. Cohere partnered with Fujitsu to launch Takane, a custom Japanese model. Technical discussions included challenges in LoRA fine-tuning, float8 quantization in Keras, and new tools like create-llama for agent templates. Industry commentary raised concerns about AI development priorities and highlighted freelancing opportunities in AI.
not much happened today
llama-3 o1 deepseek-2.5 gpt-4 claude-3.5-sonnet 3dtopia-xl cogvideox anthropic meta-ai-fair openai deepseek-ai llamaindex langchainai retrieval-augmented-generation prompt-caching multimodality multi-agent-systems reasoning diffusion-models image-to-video prompting enterprise-ai agentic-ai long-context model-evaluation caching model-cost-efficiency
Anthropic introduced a RAG technique called Contextual Retrieval that reduces retrieval failure rates by 67% using prompt caching. Meta is teasing multimodal Llama 3 ahead of Meta Connect. OpenAI is hiring for a multi-agent research team focusing on improved AI reasoning with its o1 models, which have sparked mixed reactions. DeepSeek 2.5 is noted as a cost-effective alternative to GPT-4 and Claude 3.5 Sonnet. New models like 3DTopia-XL for 3D asset generation and CogVideoX for image-to-video conversion were highlighted. Techniques to boost reasoning by re-reading questions and combining retrieval with prompt caching were shared. Industry insights emphasize the necessity of AI adoption in enterprises and the disruption of traditional ML businesses. Tools like LangChainAI's LangGraph Templates and LlamaIndex's LlamaParse Premium enhance agentic applications and multimodal content extraction. Discussions on LLM evals and caching highlight production challenges and improvements. "Companies not allowing developers to use AI are unlikely to succeed" was a key sentiment.
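Contextual Retrieval's core move is to prepend an LLM-written blurb situating each chunk within its document before indexing. A minimal sketch with a hard-coded stand-in for the LLM call (the real technique prompts a model per chunk, with the full document held cheaply in the prompt cache):

```python
def situate(doc_title, chunk):
    # Stand-in for the LLM call that writes a short blurb situating the
    # chunk in the whole document; here it just names the source.
    return f"This chunk is from the document '{doc_title}'. "

def build_contextual_index(doc_title, chunks):
    """Prepend each chunk's situating context before embedding/BM25
    indexing, so retrieval sees document-level cues the raw chunk lacks."""
    return [situate(doc_title, c) + c for c in chunks]
```

The payoff is that a query like "Q2 revenue growth" can now match a chunk that, in isolation, never mentions which quarter or company it describes.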
Gemini Live
gemini-1.5-pro genie falcon-mamba gemini-1.5 llamaindex google anthropic tii supabase perplexity-ai llamaindex openai hugging-face multimodality benchmarking long-context retrieval-augmented-generation open-source model-releases model-integration model-performance software-engineering linear-algebra hugging-face-hub debugging omarsar0 osanseviero dbrxmosaicai alphasignalai perplexity_ai _jasonwei svpino
Google launched Gemini Live on Android for Gemini Advanced subscribers during the Pixel 9 event, featuring integrations with Google Workspace apps and other Google services. The rollout began on 8/12/2024, with iOS support planned. Cosine released Genie, an AI software engineering system achieving a 57% improvement on SWE-Bench. TII introduced Falcon Mamba, a 7B attention-free open-access model scalable to long sequences. Benchmarking showed that longer context lengths do not always improve Retrieval-Augmented Generation. Supabase launched an AI-powered Postgres service dubbed the "ChatGPT of databases," fully open source. Perplexity AI partnered with Polymarket to integrate real-time probability predictions into search results. A tutorial demonstrated a multimodal recipe recommender using Qdrant, LlamaIndex, and Gemini. An OpenAI engineer shared success tips emphasizing debugging and hard work. The connection between matrices and graphs in linear algebra was highlighted for insights into nonnegative matrices and strongly connected components. Keras 3.5.0 was released with Hugging Face Hub integration for model saving and loading.
a quiet weekend
sam-2 qwen2-math gpt-4 claude-3.5 figure deepmind boston-dynamics alibaba llamaindex robotics object-segmentation real-time-processing disease-prediction speech-recognition cli-tools model-performance adcock_brett rasbt hamel-husain rohanpaul_ai
Figure unveiled Figure 02, claimed as the most advanced humanoid robot, operating autonomously at BMW's Plant Spartanburg. DeepMind developed a table tennis robot achieving 100% wins against beginners and 55% against intermediates. Boston Dynamics showcased the dexterity of its fully-electric Atlas robot performing pushups and burpees. An autonomous dental robot performed the world's first dental procedure on a human, reducing a 2-hour process to 15 minutes using a 3D volumetric scanner. SAM 2 was introduced as an open model for real-time object segmentation without custom adaptation. Alibaba released Qwen2-Math, outperforming GPT-4 and Claude 3.5 in math capabilities. A new Listening-While-Speaking Language Model (LSLM) enables simultaneous listening and speaking in real-time. Researchers developed a disease prediction AI with 95% accuracy for diseases like coronary artery disease, type 2 diabetes, and breast cancer. Tools like LlamaParse CLI and MLX Whisper package enhance PDF parsing and speech recognition, with the latter running 40X faster than realtime on M1 Max. The news highlights significant advancements in robotics, AI models, and practical AI tools.
not much happened today
qwen2-math-72b gpt-4o claude-3.5-sonnet gemini-1.5-pro llama-3.1-405b idefics3-llama-8b anthropic google mistral-ai llamaindex math fine-tuning synthetic-data reinforcement-learning bug-bounty visual-question-answering open-source retrieval-augmented-generation agentic-ai ai-safety policy rohanpaul_ai anthropicai mervenoyann jeremyphoward omarsar0 ylecun bindureddy
Qwen2-Math-72B outperforms GPT-4o, Claude-3.5-Sonnet, Gemini-1.5-Pro, and Llama-3.1-405B on math benchmarks using synthetic data and advanced optimization techniques. Google AI cuts pricing for Gemini 1.5 Flash by up to 78%. Anthropic expands its bug bounty program targeting universal jailbreaks in next-gen safety systems. Tutorial on QLoRA fine-tuning of IDEFICS3-Llama 8B for visual question answering released. A Chinese open weights model surpasses previous MATH benchmark records. Surveys on Mamba models and LLM-based agents for software engineering highlight advancements and applications. Open-source tools like R2R RAG engine and LlamaIndex Workflows simplify building complex AI applications. Mistral AI introduces customizable AI agents. Concerns raised about California bill SB 1047's focus on existential risk and debates on banning open-source AI. Memes and humor continue in AI communities.
Too Cheap To Meter: AI prices cut 50-70% in last 30 days
gpt-4o gpt-4o-mini llama-3-1-405b mistral-large-2 gemini-1.5-flash deepseek-v2 sonnet-3.5 exaone-3.0 minicpm-v-2.6 claude-3.5 gpt-4o-2024-08-06 llamaindex together-ai deepinfra deepseek-ai mistral-ai google-deepmind lg-ai-research price-cuts context-caching instruction-tuning vision benchmarks pytorch attention-mechanisms reinforcement-learning-from-human-feedback compute-optimal-scaling rohanpaul_ai akhaliq mervenoyann sophiamyang chhillee karpathy
Gemini 1.5 Flash has cut prices by approximately 70%, offering a highly competitive free tier of 1 million tokens per minute at $0.075/mtok, intensifying the AI model price war. Other significant price reductions include GPT-4o (~50% cut to $2.50/mtok), GPT-4o mini (70-98.5% cut to $0.15/mtok), Llama 3.1 405b (46% cut to $2.7/mtok), and Mistral Large 2 (62% cut to $3/mtok). DeepSeek v2 introduced context caching, reducing input token costs by up to 90% to $0.014/mtok. New model releases include Llama 3.1 405b, Sonnet 3.5, EXAONE-3.0 (7.8B instruction-tuned by LG AI Research), and MiniCPM V 2.6 (a vision-language model combining SigLIP 400M and Qwen2-7B). Benchmarks show Mistral Large performing well on ZebraLogic and Claude-3.5 leading LiveBench. FlexAttention, a new PyTorch API, simplifies and optimizes attention mechanisms. Andrej Karpathy analyzed RLHF, highlighting its limitations compared to traditional reinforcement learning. Google DeepMind research on compute-optimal scaling was also summarized.
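FlexAttention's core idea is a user-supplied score_mod function applied to each attention logit before softmax, so patterns like causal masking become one-line callables. A pure-Python sketch of that idea (FlexAttention's real API also takes batch/head indices and compiles to fused GPU kernels; this is only an illustration):

```python
import math

def attention_with_score_mod(q, k, v, score_mod):
    """Single-head attention where score_mod(score, q_idx, kv_idx)
    rewrites each logit before softmax -- the idea FlexAttention exposes."""
    out = []
    for i, qi in enumerate(q):
        scores = [score_mod(sum(a * b for a, b in zip(qi, kj)), i, j)
                  for j, kj in enumerate(k)]
        m = max(scores)  # finite: position i always attends to itself
        w = [math.exp(s - m) for s in scores]
        z = sum(w)
        out.append([sum(wj * vj[d] for wj, vj in zip(w, v)) / z
                    for d in range(len(v[0]))])
    return out

def causal(score, q_idx, kv_idx):
    # Mask out future positions, the standard decoder-only pattern.
    return score if kv_idx <= q_idx else float("-inf")
```

Swapping in a different callable (sliding window, ALiBi-style biases, document masks) changes the attention pattern without touching the kernel, which is the point of the API.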
Gemma 2 tops /r/LocalLlama vibe check
gemma-2-9b gemma-2-27b llama-3 mistral-7b phi-3 qwen gemma llamaindex mistral-ai cohere deepseek-ai nous-research eureka-labs model-comparison local-llms multilinguality model-efficiency fine-tuning ai-education ai-teaching-assistants andrej-karpathy
Gemma 2 (9B, 27B) is highlighted as a top-performing local LLM, praised for its speed, multilingual capabilities, and efficiency on consumer GPUs like the 2080 Ti. It outperforms models like Llama 3 and Mistral 7B in various tasks, including non-English text processing and reasoning. The community discussion on /r/LocalLlama reflects a strong preference for Gemma 2, with 18 mentions, compared to 10 mentions for Llama 3 and 9 for Mistral. Other models like Phi 3 and Qwen also received mentions but are considered surpassed by Gemma 2. Additionally, Andrej Karpathy announced the launch of Eureka Labs, an AI+Education startup aiming to create an AI-native school with AI Teaching Assistants, starting with the LLM101n course to teach AI training fundamentals. This initiative is seen as a significant development in AI education.
Mozilla's AI Second Act
llama-3 claude-3-opus gemini-1.5 deepseek-coder-v2 gpt-4 mozilla llamaindex anthropic etched-ai sohu deepseek openai vector-search inference-speed hardware-benchmarks context-windows open-source-models coding reasoning model-benchmarking gpu-inference agentic-ai justine-tunney stephen-hood tim-dettmers bindureddy
Mozilla showcased detailed live demos of llamafile and announced sqlite-vec for vector search integration at the AIE World's Fair. LlamaIndex launched llama-agents. Anthropic introduced new UI features and Projects for Claude with a 200K context window. Etched AI revealed Sohu, a specialized inference chip claiming 500k tokens/sec (enough to run 15 agent trajectories/sec), though the benchmark claims are questioned. Tim Dettmers shared a theoretical GPU inference limit of ~300k tokens/sec for 8xB200 NVLink on a 70B Llama. DeepSeek Coder v2 outperforms Gemini and GPT-4 variants in coding and reasoning. The PyTorch documentary launched to little attention.
Nemotron-4-340B: NVIDIA's new large open models, built on syndata, great for syndata
nemotron-4-340b mixtral llama-3 gemini-1.5 gpt-4o mamba-2-hybrid-8b samba-3.8b-instruct dolphin-2.9.3 faro-yi-9b-dpo nvidia hugging-face mistral-ai llamaindex cohere gemini mistral synthetic-data model-alignment reward-models fine-tuning long-context model-scaling inference-speed mixture-of-agents open-source-models model-training instruction-following context-windows philipp-schmid bryan-catanzaro oleksii-kuchaiev rohanpaul_ai cognitivecompai _philschmid 01ai_yi
NVIDIA has scaled up its Nemotron-4 model from 15B to a massive 340B dense model, trained on 9T tokens, achieving performance comparable to GPT-4. The model alignment process uses over 98% synthetic data, with only about 20K human-annotated samples for fine-tuning and reward model training. The synthetic data generation pipeline is open-sourced, including synthetic prompts and preference data generation. The base and instruct versions outperform Mixtral and Llama 3, while the reward model ranks better than Gemini 1.5, Cohere, and GPT-4o. Other notable models include Mamba-2-Hybrid 8B, which is up to 8x faster than Transformers and excels on long-context tasks, Samba-3.8B-instruct for infinite context length with linear complexity, Dolphin-2.9.3 tiny models optimized for low-resource devices, and Faro Yi 9B DPO with a 200K context window running efficiently on 16GB VRAM. The Mixture-of-Agents technique boosts open-source LLMs beyond GPT-4 Omni on AlpacaEval 2.0.
ALL of AI Engineering in One Place
claude-3-sonnet claude-3 openai google-deepmind anthropic mistral-ai cohere hugging-face adept midjourney character-ai microsoft amazon nvidia salesforce mastercard palo-alto-networks axa novartis discord twilio tinder khan-academy sourcegraph mongodb neo4j hasura modular cognition anysphere perplexity-ai groq mozilla nous-research galileo unsloth langchain llamaindex instructor weights-biases lambda-labs neptune datastax crusoe covalent qdrant baseten e2b octo-ai gradient-ai lancedb log10 deepgram outlines crew-ai factory-ai interpretability feature-steering safety multilinguality multimodality rag evals-ops open-models code-generation gpus agents ai-leadership
The upcoming AI Engineer World's Fair in San Francisco from June 25-27 will feature a significantly expanded format with booths, talks, and workshops from top model labs like OpenAI, DeepMind, Anthropic, Mistral, Cohere, HuggingFace, and Character.ai. It includes participation from Microsoft Azure, Amazon AWS, Google Vertex, and major companies such as Nvidia, Salesforce, Mastercard, Palo Alto Networks, and more. The event covers 9 tracks including RAG, multimodality, evals/ops, open models, code generation, GPUs, agents, AI in Fortune 500, and a new AI leadership track. Additionally, Anthropic shared interpretability research on Claude 3 Sonnet, revealing millions of interpretable features that can be steered to modify model behavior, including safety-relevant features related to bias and unsafe content, though more research is needed for practical applications. The event offers a discount code for AI News readers.
Snowflake Arctic: Fully Open 10B+128x4B Dense-MoE Hybrid LLM
snowflake-arctic phi-3 llama-3-70b llama-3 stable-diffusion-3 sd3-turbo gpt-3.5-turbo snowflake databricks deepseek deepspeed nvidia stable-diffusion adobe apple llamaindex lmsys openai mixture-of-experts curriculum-learning model-release image-generation video-upscaling quantization inference-speed benchmarking model-comparison open-source on-device-ai
Snowflake Arctic is a notable new foundation language model released under Apache 2.0, claiming superiority over Databricks in data warehouse AI applications and adopting a mixture-of-experts architecture inspired by DeepSeekMoE and DeepSpeed-MoE. The model employs a 3-stage curriculum training strategy similar to the recent Phi-3 paper. In AI image and video generation, Nvidia introduced the Align Your Steps technique improving image quality at low step counts, while Stable Diffusion 3 and SD3 Turbo models were compared for prompt understanding and image quality. Adobe launched an AI video upscaling project enhancing blurry videos to HD, though with some high-resolution artifacts. Apple released open-source on-device language models with code and training logs, diverging from typical weight-only releases. The Llama-3-70b model ties for first place on the LMSYS leaderboard for English queries, and Phi-3 (4B params) outperforms GPT-3.5 Turbo in the banana logic benchmark. Fast inference and quantization of Llama 3 models were demonstrated on MacBook devices.
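Arctic's Dense-MoE hybrid, like the DeepSeekMoE and DeepSpeed-MoE designs it draws on, routes each token to a few experts out of many via a softmax gate. A minimal sketch of generic top-k gating (illustrative only, not Snowflake's implementation):

```python
import math

def top_k_gate(logits, k=2):
    """Softmax over expert logits, keep the top-k experts, renormalize
    so the kept gate weights sum to 1."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    probs = [e / total for e in exps]
    top = sorted(range(len(logits)), key=lambda i: probs[i], reverse=True)[:k]
    norm = sum(probs[i] for i in top)
    return {i: probs[i] / norm for i in top}

def moe_layer(x, experts, router_logits, k=2):
    """Combine only the selected experts' outputs, weighted by their gates;
    the unselected experts never run, which is where MoE saves compute."""
    gates = top_k_gate(router_logits, k)
    return sum(w * experts[i](x) for i, w in gates.items())
```

With 128 experts and k=2, only ~1.5% of expert parameters are active per token, which is how a model can be huge in parameters yet cheap per forward pass.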
OpenAI's Instruction Hierarchy for the LLM OS
phi-3-mini openelm claude-3-opus gpt-4-turbo gpt-3.5-turbo llama-3-70b rho-1 mistral-7b llama-3-8b llama-3 openai microsoft apple deepseek mistral-ai llamaindex wendys prompt-injection alignment benchmarking instruction-following context-windows model-training model-deployment inference performance-optimization ai-application career-advice drive-thru-ai
OpenAI published a paper introducing the concept of privilege levels for LLMs to address prompt injection vulnerabilities, improving defenses by 20-30%. Microsoft released the lightweight Phi-3-mini model with 4K and 128K context lengths. Apple open-sourced the OpenELM language model family with an open training and inference framework. An instruction accuracy benchmark compared 12 models, with Claude 3 Opus, GPT-4 Turbo, and Llama 3 70B performing best. The Rho-1 method enables training state-of-the-art models using only 3% of tokens, boosting models like Mistral. Wendy's deployed AI-powered drive-thru ordering, and a study found Gen Z workers prefer generative AI for career advice. Tutorials on deploying Llama 3 models on AWS EC2 highlight hardware requirements and inference server use.
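The paper's privilege levels can be pictured as rank-ordered message sources whose instructions beat conflicting lower-ranked ones. The following toy sketch is my own illustration of that ordering, not OpenAI's implementation; the source names and conflict model are assumptions.

```python
# Higher rank = more privileged. A lower-ranked source cannot override
# a directive already set by a strictly more privileged source.
PRIVILEGE = {"system": 3, "developer": 2, "user": 1, "tool_output": 0}

def resolve(instructions):
    """instructions: list of (source, (key, value)) pairs.
    Conflicts are modeled as two sources setting the same key."""
    decided = {}  # directive key -> (rank, value)
    for source, (key, value) in instructions:
        rank = PRIVILEGE[source]
        if key not in decided or rank > decided[key][0]:
            decided[key] = (rank, value)
    return {k: v for k, (_, v) in decided.items()}

policy = resolve([
    ("system", ("reveal_prompt", False)),
    ("user", ("reveal_prompt", True)),   # injection attempt, outranked
    ("user", ("language", "French")),    # benign request, no conflict
])
assert policy == {"reveal_prompt": False, "language": "French"}
```

The point of the hierarchy is exactly this asymmetry: benign user instructions pass through, while ones that contradict a higher privilege level are ignored.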
Evals-based AI Engineering
jamba bamboo qwen-1.5-moe grok-1.5 llama2-7b openai mistral-ai x-ai llamaindex evaluation fine-tuning prompt-engineering voice-cloning quantization model-optimization code-generation context-windows hamel-husain alec-radford
Hamel Husain emphasizes the importance of comprehensive evals in AI product development, highlighting evaluation, debugging, and behavior change as key iterative steps. OpenAI released a voice engine demo showcasing advanced voice cloning from small samples, raising safety concerns. Reddit discussions introduced new models like Jamba (hybrid Transformer-SSM with MoE), Bamboo (7B LLM with high sparsity based on Mistral), Qwen1.5-MoE (efficient parameter activation), and Grok 1.5 (128k context length, surpassing GPT-4 in code generation). Advances in quantization include 1-bit Llama2-7B models outperforming full precision and the QLLM quantization toolbox supporting GPTQ/AWQ/HQQ methods.
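The quantization tools mentioned above (GPTQ, AWQ, HQQ, 1-bit schemes) all build on the same round-to-grid primitive. An illustrative symmetric int8 quantize/dequantize round trip, a sketch of the primitive rather than any of those algorithms:

```python
def quantize_int8(weights):
    """Symmetric per-tensor int8: map [-max|w|, max|w|] onto [-127, 127]."""
    scale = max(abs(w) for w in weights) / 127 or 1.0  # guard all-zero input
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize_int8(q, scale):
    return [v * scale for v in q]

weights = [0.31, -1.27, 0.05, 0.88]
q, scale = quantize_int8(weights)
restored = dequantize_int8(q, scale)
# Round-trip error is bounded by half a quantization step.
assert max(abs(a - b) for a, b in zip(weights, restored)) <= scale / 2
```

Methods like GPTQ and AWQ keep this primitive but choose scales (and error-compensating updates) per group or per channel to shrink that half-step error where it matters most.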
Inflection-2.5 at 94% of GPT4, and Pi at 6m MAU
inflection-2.5 claude-3-sonnet claude-3-opus gpt-4 yi-9b mistral inflection anthropic perplexity-ai llamaindex mistral-ai langchain retrieval-augmented-generation benchmarking ocr structured-output video-retrieval knowledge-augmentation planning tool-use evaluation code-benchmarks math-benchmarks mustafa-suleyman amanda-askell jeremyphoward abacaj omarsar0
Mustafa Suleyman announced Inflection 2.5, which achieves more than 94% of GPT-4's average performance despite using only 40% of the training FLOPs. Pi's user base is growing about 10% weekly, with new features like realtime web search. The community noted similarities between Inflection 2.5 and Claude 3 Sonnet. Claude 3 Opus outperformed GPT-4 in a 1.5:1 vote and is now the default for Perplexity Pro users. Anthropic added experimental tool calling support for Claude 3 via LangChain. LlamaIndex released LlamaParse JSON Mode for structured PDF parsing and added video retrieval via VideoDB, enabling retrieval-augmented generation (RAG) pipelines. A paper proposed knowledge-augmented planning for LLM agents. New benchmarks like TinyBenchmarks and the Yi-9B model release show strong code and math performance, surpassing Mistral.
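At the core of every RAG pipeline mentioned here, whether over parsed PDFs or video transcripts, is the same retrieve-then-generate step: embed the query, rank stored chunks by similarity, and feed the top hits to the LLM. A toy cosine-similarity retriever (hand-made 2-d embeddings for illustration, not LlamaIndex's API):

```python
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def retrieve(query_vec, docs, top_k=1):
    """docs: list of (text, embedding) pairs; return the top_k closest texts."""
    ranked = sorted(docs, key=lambda d: cosine(query_vec, d[1]), reverse=True)
    return [text for text, _ in ranked[:top_k]]

docs = [
    ("Inflection 2.5 benchmark notes", [0.9, 0.1]),
    ("Yi-9B math results", [0.1, 0.9]),
]
# A query embedding near the first document retrieves that document.
assert retrieve([1.0, 0.0], docs) == ["Inflection 2.5 benchmark notes"]
```

Real pipelines swap the hand-made vectors for model embeddings and the list scan for a vector index, but the ranking step is the same.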
Not much happened today
claude-3 claude-3-opus claude-3-sonnet gpt-4 gemma-2b anthropic perplexity langchain llamaindex cohere accenture mistral-ai snowflake together-ai hugging-face european-space-agency google gpt4all multimodality instruction-following out-of-distribution-reasoning robustness enterprise-ai cloud-infrastructure open-datasets model-deployment model-discoverability generative-ai image-generation
Anthropic released Claude 3, replacing Claude 2.1 as the default on Perplexity AI, with Claude 3 Opus surpassing GPT-4 in capability. Debate continues on whether Claude 3's performance stems from emergent properties or pattern matching. LangChain and LlamaIndex added support for Claude 3 enabling multimodal and tool-augmented applications. Despite progress, current models still face challenges in out-of-distribution reasoning and robustness. Cohere partnered with Accenture for enterprise AI search, while Mistral AI and Snowflake collaborate to provide LLMs on Snowflake's platform. Together AI Research integrates DeepSpeed innovations to accelerate generative AI infrastructure. Hugging Face and the European Space Agency released a large earth observation dataset, and Google open sourced Gemma 2B, optimized for smartphones via the MLC-LLM project. GPT4All improved model discoverability for open models. The AI community balances excitement over new models with concerns about limitations and robustness, alongside growing enterprise adoption and open-source contributions. Memes and humor continue to provide social commentary.
Stable Diffusion 3 — Rombach & Esser did it again!
stable-diffusion-3 claude-3 orca dolphincoder-starcoder2-15b stability-ai anthropic microsoft latitude perplexity-ai llamaindex tripo-ai diffusion-models multimodality benchmarking human-evaluation text-generation image-generation 3d-modeling fine-tuning roleplay coding dataset-release soumith-chintala bill-peebles swyx kevinafischer jeremyphoward akhaliq karinanguyen_ aravsrinivas
Over 2500 new community members joined following Soumith Chintala's shoutout, highlighting growing interest in SOTA LLM-based summarization. The major highlight is the detailed paper release of Stable Diffusion 3 (SD3), showcasing advanced text-in-image control and complex prompt handling, with the model outperforming other SOTA image generation models in human-evaluated benchmarks. The SD3 model is based on an enhanced Diffusion Transformer architecture called MMDiT. Meanwhile, Anthropic released Claude 3 models, noted for human-like responses and emotional depth, scoring 79.88% on HumanEval but costing over twice as much as GPT-4. Microsoft launched new Orca-based models and datasets, and Latitude released DolphinCoder-StarCoder2-15b with strong coding capabilities. Integration of image models by Perplexity AI and 3D CAD generation by PolySpectra powered by LlamaIndex were also highlighted. "SD3's win rate beats all other SOTA image gen models (except perhaps Ideogram)" and "Claude 3 models are very good at generating d3 visualizations from text descriptions."
Welcome Interconnects and OpenRouter
mistral-large miqu mixtral gpt-4 mistral-7b mistral-ai openai perplexity-ai llamaindex qwen langchain model-comparison model-optimization quantization role-playing story-writing code-clarity ai-assisted-decompilation asynchronous-processing quantum-computing encoder-based-diffusion open-source hardware-experimentation rag-systems nathan-lambert alex-atallah
Analysis of 22 Discord guilds, 349 channels, and 12,885 messages revealed active discussions on model comparisons and optimizations involving Mistral AI, Miqu, and GGUF quantized models. Highlights include comparing Mistral Large with GPT-4, focusing on cost-effectiveness and performance, and exploring quantization techniques like GPTQ and QLoRA to reduce VRAM usage. Advanced applications such as role-playing, story-writing, code clarity, and AI-assisted decompilation were emphasized, alongside development of tools like an asynchronous summarization script for Mistral 7B. The intersection of quantum computing and AI was discussed, including DARPA-funded projects and encoder-based diffusion techniques for image processing. Community efforts featured new Spanish LLM announcements, hardware experimentation, and open-source initiatives, with platforms like Perplexity AI and LlamaIndex noted for innovation and integration. Speculation about Mistral AI's open-source commitment and tools like R2R for rapid RAG deployment highlighted the collaborative spirit.
RWKV "Eagle" v5: Your move, Mamba
rwkv-v5 mistral-7b miqu-1-70b mistral-medium llama-2 mistral-instruct-v0.2 mistral-tuna llama-2-13b kunoichi-dpo-v2-7b gpt-4 eleutherai mistral-ai hugging-face llamaindex nous-research rwkv lmsys fine-tuning multilinguality rotary-position-embedding model-optimization model-performance quantization speed-optimization prompt-engineering model-benchmarking reinforcement-learning andrej-karpathy
RWKV v5 Eagle was released with evaluation results beating Mistral 7B, trading some English performance for multilingual capabilities. The mysterious miqu-1-70b model sparked debate about its origins, possibly a leak or distillation of Mistral Medium or a fine-tuned Llama 2. Discussions highlighted fine-tuning techniques, including the effectiveness of 1,000 high-quality prompts over larger mixed-quality datasets, and tools like Deepspeed, Axolotl, and QLoRA. The Nous Research AI community emphasized the impact of Rotary Position Embedding (RoPE) theta settings on LLM extrapolation, improving models like Mistral Instruct v0.2. Speed improvements in Mistral Tuna kernels reduced token processing costs, enhancing efficiency. Eagle 7B, at 7.52B parameters, showcased strong multilingual performance, surpassing other 7B-class models.
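The RoPE theta discussion comes down to how the base constant sets per-dimension rotation frequencies: raising theta slows every non-trivial rotation, so positions far beyond the training length still map to distinct angles. A minimal sketch of the standard frequency formula:

```python
import math

def rope_frequencies(dim, theta=10000.0):
    """Inverse frequency for each rotary dimension pair: theta ** (-2i / dim)."""
    return [theta ** (-2 * i / dim) for i in range(dim // 2)]

def rotation_angle(pos, freq):
    # Angle (in radians) by which this dimension pair is rotated at position pos.
    return pos * freq

# Raising theta lowers every frequency except the first, stretching the
# usable position range; this is the basis of long-context RoPE tweaks.
base = rope_frequencies(8, theta=10000.0)
stretched = rope_frequencies(8, theta=1000000.0)
assert all(s <= b for s, b in zip(stretched, base))
assert base[0] == 1.0  # the first pair always rotates one radian per position
```

Extrapolation fails when high-frequency pairs wrap around at unseen positions; a larger theta pushes that wrap-around point out, which is why retuning it helps models like Mistral Instruct v0.2 handle longer contexts.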
1/6-7/2024: LLaMA Pro - an alternative to PEFT/RAG??
llama-3 llama-3-1-1b llama-3-8-3b gpt-4 gpt-3.5 dall-e openai mistral-ai llamaindex langchain fine-tuning model-expansion token-limits privacy multilinguality image-generation security custom-models model-training yannic-kilcher
New research papers introduce promising LLaMA extensions, including TinyLlama, a compact 1.1B parameter model pretrained on about 1 trillion tokens for 3 epochs, and LLaMA Pro, an 8.3B parameter model expanding LLaMA2-7B with additional training on 80 billion tokens of code and math data. LLaMA Pro adds layers to avoid catastrophic forgetting and balances language and code tasks but faces scrutiny for not using newer models like Mistral or Qwen. Meanwhile, OpenAI Discord discussions reveal insights on GPT-4 token limits, privacy reassurances, fine-tuning for GPT-3.5, challenges with multi-language image recognition, custom GPT creation requiring ChatGPT Plus, and security concerns in GPT deployment. Users also share tips on dynamic image generation with DALL-E and logo creation.
12/14/2023: $1e7 for Superalignment
gemini bard gpt-4 gpt-4.5 llama-2 openai llamaindex perplexity-ai prompt-engineering api custom-gpt json bug-fixes chatbots performance tts code-generation image-recognition jan-leike patrick-collison
Jan Leike is launching a new grant initiative inspired by Patrick Collison's Fast Grants to support AI research. OpenAI introduced a new developers Twitter handle @OpenAIDevs for community updates. Discussions of Google's Gemini and Bard chatbots highlight their ability to read each other's instructions and offer unique coding solutions. Users reported various issues with GPT-4, including performance problems, customization difficulties, and a resolved bug in image recognition. There are ongoing conversations about prompt engineering challenges and new JSON mode support in Convo-lang for API use. Concerns about misuse of chatbots for illegal activities and alternatives like Llama 2 models and the Perplexity chatbot were also discussed.