Topic: "code-generation"
not much happened today
deepseek-r1-0528 pali-gemma-2 gemma-3 shieldgemma-2 txgemma gemma-3-qat gemma-3n-preview medgemma dolphingemma signgemma claude-4 opus-4 claude-sonnet-4 codestral-embed bagel qwen nemotron-cortexa gemini-2.5-pro deepseek-ai huggingface gemma claude bytedance qwen nemotron sakana-ai-labs benchmarking model-releases multimodality code-generation model-performance long-context reinforcement-learning model-optimization open-source yuchenj_uw _akhaliq clementdelangue osanseviero alexalbert__ guillaumelample theturingpost lmarena_ai epochairesearch scaling01 nrehiew_ ctnzr
DeepSeek released an updated R1 model (R1-0528), available on Hugging Face and through inference partners. The Gemma model family continues prolific development, including PaliGemma 2, Gemma 3, and others. Claude 4 and its variants, Opus 4 and Claude Sonnet 4, show top benchmark performance, including new SOTA results on ARC-AGI-2 and WebDev Arena. Codestral Embed introduces a 3072-dimensional code embedder. BAGEL, an open-source multimodal model by ByteDance, supports reading, reasoning, drawing, and editing over long mixed contexts. Benchmarking highlights include Nemotron-CORTEXA topping SWE-bench and Gemini 2.5 Pro's results on VideoGameBench. Discussions of whether random rewards can be effective in RL center on Qwen models. "Opus 4 NEW SOTA ON ARC-AGI-2. It's happening - I was right" and "Claude 4 launch has dev moving at a different pace" reflect excitement in the community.
not much happened today
chatgpt o3 o4 bagel-7b medgemma acereason-nemotron-14b codex gemini openai bytedance google nvidia sakana-ai-labs deep-learning-ai gemini agenticseek anthropic agentic-systems multimodality reasoning code-generation prompt-engineering privacy ethical-ai emergence synthetic-data speech-instruction-tuning low-resource-languages humor scaling01 mervenoyann sakananailabs _philschmid omarsar0 teortaxestex andrewlampinen sedielem cis_female
OpenAI plans to evolve ChatGPT into a super-assistant by 2025 with models like o3 and o4 enabling agentic tasks and supporting a billion users. Recent multimodal and reasoning model releases include ByteDance's BAGEL-7B, Google's MedGemma, and NVIDIA's ACEReason-Nemotron-14B. The Sudoku-Bench Leaderboard highlights ongoing challenges in AI creative reasoning. In software development, OpenAI's Codex aids code generation and debugging, while Gemini's Context URL tool enhances prompt context. AgenticSeek offers a local, privacy-focused alternative for autonomous agents. Ethical concerns are raised about AGI development priorities and Anthropic's alignment with human values. Technical discussions emphasize emergence in AI and training challenges, with humor addressing misconceptions about Gemini 3.0 and async programming in C. A novel synthetic speech training method enables instruction tuning of LLMs without real speech data, advancing low-resource language support.
OpenAI buys Jony Ive's io for $6.5b, LMArena lands $100m seed from a16z
gemini-2.5-pro gemini-diffusion openai lmarena a16z mistral-ai google google-deepmind multimodality reasoning code-generation math model-fine-tuning ai-assistants voice memory-optimization sundar_pichai
OpenAI confirmed a partnership with Jony Ive to develop consumer hardware. LMArena secured a $100 million seed round from a16z. Mistral launched a new code-model fine-tune. Google DeepMind announced multiple updates at Google I/O 2025, including over a dozen new models and 20 AI products. Key highlights include the release of Gemini 2.5 Pro and Gemini Diffusion, featuring advanced multimodal reasoning, coding, and math capabilities, plus integration of Gemini into Google Chrome as an AI browsing assistant. The Deep Think enhanced reasoning mode and Project Astra improvements were also introduced, focusing on voice output, memory, and computer control for a universal AI assistant.
not much happened today
open-code-reasoning-32b open-code-reasoning-14b open-code-reasoning-7b mistral-medium-3 llama-4-maverick gemini-2.5-pro gemini-2.5-flash claude-3.7-sonnet absolute-zero-reasoner x-reasoner fastvlm parakeet-asr openai nvidia mistral-ai google apple huggingface reinforcement-learning fine-tuning code-generation reasoning vision on-device-ai model-performance dataset-release model-optimization reach_vb artificialanlys scaling01 iscienceluvr arankomatsuzaki awnihannun risingsayak
OpenAI launched Reinforcement Finetuning along with Deep Research over GitHub repos, the latter drawing comparisons to Cognition's DeepWiki. Nvidia open-sourced its Open Code Reasoning models (32B, 14B, 7B) under an Apache 2.0 license, showing 30% better token efficiency and compatibility with llama.cpp, vLLM, transformers, and TGI. Independent evaluations show Mistral Medium 3 rivaling Llama 4 Maverick, Gemini 2.0 Flash, and Claude 3.7 Sonnet in coding and math reasoning, priced significantly lower but no longer open-source. Google's Gemini 2.5 Pro is billed as its most intelligent model, with improved coding from simple prompts, while Gemini 2.5 Flash costs roughly 150x more to run than Gemini 2.0 Flash due to higher per-token pricing and much higher token usage. The Absolute Zero Reasoner (AZR) achieves SOTA performance in coding and math reasoning via reinforced self-play without any external data. The vision-language model X-REASONER is post-trained on general-domain text for reasoning. Apple ML research released FastVLM with an on-device iPhone demo. The HiDream LoRA trainer supports QLoRA fine-tuning under memory constraints. Nvidia's Parakeet ASR model tops the Hugging Face ASR leaderboard, with an MLX implementation. New datasets SwallowCode and SwallowMath boost LLM performance in math and code. Overall, a quiet news day that still brought notable model releases and performance insights.
not much happened today
grok-3 grok-3-mini gpt-4.5 claude-3.7-sonnet quasar-alpha optimus-alpha gpt-4.1 kaleidoscope internvl3 internvit qwen2.5vl transmamba fantasytalking openai alibaba cmu reinforcement-learning reasoning benchmarks vision multilinguality multimodality transformers attention-mechanisms agents code-generation model-performance rasbt sarahookr mervenoyann gneubig svpino mathemagic1an
Independent evaluations show Grok-3 outperforming models like GPT-4.5 and Claude 3.7 Sonnet on reasoning benchmarks, with Grok-3 mini also performing strongly for its size. Research on reinforcement learning (RL) fine-tuning reveals potential improvements for small reasoning models but also notes instability in reported gains. Benchmark results suggest Quasar Alpha and Optimus Alpha may be versions of GPT-4.1. Vision and multimodal models like Kaleidoscope, supporting 18 languages, and InternVL3, built on InternViT and Qwen2.5VL, demonstrate advances in multilingual vision and reasoning. The fusion model TransMamba combines transformer precision with SSM speed. Alibaba's FantasyTalking generates realistic talking portraits. Agent-focused events at CMU and tools like FilmAgent AI for virtual film production and the BrowseComp benchmark for browsing agents were announced. The coding assistant Augment supports multiple IDEs with code analysis and suggestions. Discussions also covered Google's new agent-to-agent protocol concept.
DeepCoder: A Fully Open-Source 14B Coder at O3-mini Level
deepcoder-14b o3-mini o1 gemini-2.5-pro kimi-vl-a3b gpt-4o llama-4-scout maverick behemoth gen-4-turbo imagen-3 together-ai agentica openai bytedance google-deepmind moonshot-ai meta-ai-fair runway open-source reinforcement-learning code-generation multimodality model-training mixture-of-experts l2-normalization image-generation model-performance context-windows philschmid lepikhin reach_vb akhaliq yuchenj_uw epochairesearch danielhanchen c_valenzuelab
Together AI and Agentica released DeepCoder-14B, an open-source 14B parameter coding model rivaling OpenAI's o3-mini and o1 on coding benchmarks, trained with an open-source RL framework from ByteDance and costing about $26,880. Google DeepMind launched Gemini 2.5 Pro with experimental "Flash" versions available to subscribers. Moonshot AI introduced Kimi-VL-A3B, a multimodal model with 128K context outperforming GPT-4o on vision and math benchmarks. Meta AI released Llama 4 Scout and Maverick, with a larger Behemoth model in training, featuring mixture-of-experts and L2 norm techniques. Runway launched Gen-4 Turbo with 10x better results than Gen-3 at the same cost. Google announced Imagen 3, a high-quality text-to-image model now in Vertex AI, enabling easier object removal. The report highlights open-source contributions, reinforcement learning training optimizations, and significant model performance improvements across coding, multimodal, and image generation domains.
not much happened today
gemini-2.5-pro chatgpt deepseek-v3 qwen-2.5 claude-3.5-sonnet claude-3.7-sonnet google anthropic openai llama_index langchain runway deepseek math benchmarking chains-of-thought model-performance multi-agent-systems agent-frameworks media-generation long-horizon-planning code-generation rasbt danielhanchen hkproj
Gemini 2.5 Pro shows strengths and weaknesses, notably lacking LaTeX math rendering unlike ChatGPT, and scored 24.4% on the 2025 USAMO. DeepSeek V3 ranks 8th and 12th on recent leaderboards. Qwen 2.5 models have been integrated into the PocketPal app. Research from Anthropic reveals that Chain-of-Thought (CoT) reasoning is often unfaithful, especially on harder tasks, raising safety concerns. OpenAI's PaperBench benchmark shows AI agents struggle with long-horizon planning, with Claude 3.5 Sonnet achieving only 21.0% accuracy. The CodeAct framework generalizes ReAct by letting agents write and execute code dynamically (a minimal sketch follows below). LangChain explains multi-agent handoffs in LangGraph. Runway Gen-4 marks a new phase in media creation.
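To make the CodeAct pattern concrete, here is a minimal, hedged sketch of such a loop. The `llm` callable is a hypothetical stand-in for any prompt-to-code model, and real systems would sandbox execution; this illustrates the pattern, not the paper's implementation:

```python
import contextlib
import io

def execute(code: str) -> str:
    """Run model-emitted Python and capture stdout as the observation."""
    buf = io.StringIO()
    try:
        with contextlib.redirect_stdout(buf):
            exec(code, {})  # real systems sandbox this step
    except Exception as e:
        return f"Error: {e!r}"
    return buf.getvalue()

def codeact_loop(llm, task: str, max_turns: int = 5) -> str:
    """llm: hypothetical callable mapping a prompt string to Python source."""
    history = [f"Task: {task}"]
    for _ in range(max_turns):
        code = llm("\n".join(history))  # the action is executable code
        obs = execute(code)             # the environment runs it
        history += [f"Action:\n{code}", f"Observation:\n{obs}"]
        if obs.startswith("FINAL"):     # convention: print('FINAL: <answer>')
            return obs
    return history[-1]
```

The key difference from fixed-schema ReAct tool calls is that the action space is arbitrary code, so intermediate results can be composed with ordinary variables and control flow.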
DeepSeek's Open Source Stack
qwen-qwq-32b start character-3 gemini gemini-2.0 mercury-coder gpt-4.5 jamba-mini-1.6 gemini-2.0-flash gpt-4o-mini mistral-small-3 mistral-ocr deepseek pyspur hugging-face togethercompute hedra-labs google-deepmind deeplearningai openai ai21-labs mistral-ai fine-tuning benchmarking multimodality code-generation diffusion-models model-performance model-optimization ocr embedding-models context-windows runtime-limits _akhaliq lmarena_ai reach_vb danielhanchen _philschmid aidan_mclau vikhyatk jerryjliu0
DeepSeek's Open Source Week was summarized by PySpur, highlighting multiple interesting releases. The Qwen QwQ-32B model was fine-tuned into START, excelling in PhD-level science QA and math benchmarks. Character-3, an omnimodal AI video generation model by Hedra Labs and Together AI, enables realistic animated content creation. Google DeepMind introduced the Gemini embedding model with an 8k context window, ranking #1 on MMTEB, alongside the Gemini 2.0 Code Executor supporting Python libraries and auto-fix features. Inception Labs' Mercury Coder is a diffusion-based code generation model offering faster token processing. OpenAI released GPT-4.5, their largest model yet but with less reasoning ability than some competitors. AI21 Labs launched Jamba Mini 1.6, noted for superior output speed compared to Gemini 2.0 Flash, GPT-4o mini, and Mistral Small 3. A new dataset of 1.9M scanned pages was released for OCR benchmarking, with Mistral OCR showing competitive but not top-tier document parsing performance compared to LLM/LVM-powered methods. "Cracked engineers are all you need."
Genesis: Generative Physics Engine for Robotics (o1-mini version)
o1 o1-preview gpt-4o claude-3.5-sonnet gemini-2.0-pro llama-3-3b llama-3-70b openai google-deepmind meta-ai-fair hugging-face function-calling structured-outputs vision performance-benchmarks sdk webrtc reasoning math code-generation transformer-architecture model-training humanoid-robots search model-efficiency dataset-sharing aidan_mclau sundarpichai adcock_brett
OpenAI launched the o1 model API featuring function calling, structured outputs, vision support, and developer messages, using 60% fewer reasoning tokens than o1-preview. The model excels in math and code, with a 0.76 LiveBench coding score that outperforms Claude 3.5 Sonnet. Beta SDKs for Go and Java were released, along with WebRTC support and 60% lower prices for realtime audio. Deployment of Google Gemini 2.0 Pro (Gemini Exp 1206) accelerated, showing improved coding, math, and reasoning performance. Meta AI FAIR introduced research on training transformers directly on raw bytes using dynamic entropy-based patching. Commercial humanoid robots were successfully deployed by an industry player. Hugging Face researchers demonstrated that their 3B Llama model can outperform the 70B Llama model on MATH-500 accuracy using search techniques, highlighting efficiency gains with smaller models, though concerns about reproducibility and domain-specific limitations were noted.
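As an illustration of the developer-message and structured-outputs features, here is a minimal sketch using the openai Python SDK; the prompt and JSON schema are invented for the example, and exact model availability on your account is an assumption:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

resp = client.chat.completions.create(
    model="o1",
    messages=[
        # o-series models take "developer" messages in place of "system"
        {"role": "developer", "content": "You are a terse math assistant."},
        {"role": "user", "content": "Factor 3x^2 + 10x + 8."},
    ],
    # structured outputs: constrain the reply to a JSON schema
    response_format={
        "type": "json_schema",
        "json_schema": {
            "name": "factorization",  # hypothetical schema for this example
            "strict": True,
            "schema": {
                "type": "object",
                "properties": {
                    "factors": {"type": "array", "items": {"type": "string"}}
                },
                "required": ["factors"],
                "additionalProperties": False,
            },
        },
    },
)
print(resp.choices[0].message.content)  # e.g. {"factors": ["(3x + 4)", "(x + 2)"]}
```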
Common Corpus: 2T Open Tokens with Provenance
qwen-2.5-coder claude-3.5-sonnet janusflow-1.3b ocronos-vintage pleais huggingface langchainai deepseek alibaba anthropic provenance ocr multilingual-datasets prompt-engineering multimodality image-generation code-generation quantization model-scaling inference-efficiency tim-dettmers tom-doerr omarsar0 swyx madiator reach_vb
PleIAs released Common Corpus on Hugging Face, the largest fully open multilingual dataset, with over 2 trillion tokens and detailed provenance information. They also introduced OCRonos-Vintage, a 124M-parameter OCR correction model that efficiently fixes digitization errors on CPU and GPU, unlocking knowledge trapped in PDFs. On AI tools, LangChainAI launched Prompt Canvas for collaborative prompt engineering, while DeepSeek released JanusFlow 1.3B, a unified multimodal LLM integrating autoregressive and rectified-flow models for improved image understanding and generation. Alibaba Cloud announced Qwen2.5-Coder, a code-focused LLM with advanced coding capabilities, and Claude 3.5 Sonnet was highlighted for superior code generation. Tim Dettmers and others discussed quantization challenges and the "Scaling Laws for Precision" paper, emphasizing the impact of low-precision training on model scalability and inference efficiency; alternative efficiency methods were also noted.
BitNet was a lie?
qwen-2.5-coder-32b-instruct gpt-4o llama-3 sambanova alibaba hugging-face quantization scaling-laws model-efficiency fine-tuning model-performance code-generation open-source unit-testing ci-cd tanishq-kumar tim-dettmers
New scaling laws for precision from a group led by Chris Ré analyze over 465 pretraining runs and find that the benefits of higher precision plateau around FP6. Lead author Tanishq Kumar highlights that longer training and more data increase sensitivity to quantization, explaining challenges with models like Llama-3. Tim Dettmers, author of QLoRA, warns that the era of efficiency gains from low-precision quantization is ending, signaling a shift from scaling to optimizing existing resources. Additionally, Alibaba announced Qwen 2.5-Coder-32B-Instruct, which matches or surpasses GPT-4o on coding benchmarks, and open-source initiatives like DeepEval for LLM testing are gaining traction.
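One way to see the shape of such a result is a Chinchilla-style loss with a precision-dependent effective parameter count. The functional form below is an illustrative sketch of the idea, not the paper's exact fit; A, B, E, α, β, γ stand for fitted constants:

```latex
N_{\mathrm{eff}}(P) = N\left(1 - e^{-P/\gamma}\right), \qquad
L(N, D, P) = A\,N_{\mathrm{eff}}(P)^{-\alpha} + B\,D^{-\beta} + E
```

Under a form like this, once P is a few multiples of γ the factor (1 - e^{-P/γ}) saturates near 1, so raising precision further buys almost nothing, which is consistent with the reported plateau around FP6.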
FrontierMath: A Benchmark for Evaluating Advanced Mathematical Reasoning in AI
o1 claude-3.5-haiku gpt-4o epoch-ai openai microsoft anthropic x-ai langchainai benchmarking math moravecs-paradox mixture-of-experts chain-of-thought agent-framework financial-metrics-api pdf-processing few-shot-learning code-generation karpathy philschmid adcock_brett dylan522p
Epoch AI collaborated with over 60 leading mathematicians to create the FrontierMath benchmark, a fresh set of hundreds of original math problems with easy-to-verify answers, aiming to challenge current AI models. The benchmark reveals that all tested models, including o1, perform poorly, highlighting the difficulty of complex problem-solving and Moravec's paradox in AI. Key AI developments include the introduction of Mixture-of-Transformers (MoT), a sparse multi-modal transformer architecture reducing computational costs, and improvements in Chain-of-Thought (CoT) prompting through incorrect reasoning and explanations. Industry news covers OpenAI acquiring the chat.com domain, Microsoft launching the Magentic-One agent framework, Anthropic releasing Claude 3.5 Haiku outperforming gpt-4o on some benchmarks, and xAI securing 150MW grid power with support from Elon Musk and Trump. LangChain AI introduced new tools including a Financial Metrics API, Document GPT with PDF upload and Q&A, and LangPost AI agent for LinkedIn posts. xAI also demonstrated the Grok Engineer compatible with OpenAI and Anthropic APIs for code generation.
not much happened today
claudette llama-3-1 yi-lightning gpt-4o claude-3.5-sonnet answer-ai tencent notebooklm motherduck perplexity dropbox openai meta-ai-fair yi-ai zyphra-ai anthropic langchain openai synthetic-data fine-tuning sql audio-processing on-device-ai dataset-release transformer llm-reasoning ai-safety code-generation ai-pricing ai-job-market fchollet aravsrinivas svpino swyx
Answer.ai launched fastdata, a synthetic data generation library built on claudette and inspired by Tencent's Billion Persona paper. NotebookLM became customizable, and MotherDuck introduced notable LLM-in-SQL implementations. Perplexity and Dropbox announced competitors to Glean. OpenAI unveiled audio chat completions priced at 24 cents per minute. Meta AI released Llama 3.1, powering Lenovo AI Now's on-device agent. The Yi-Lightning model ranked #6 globally, surpassing GPT-4o. Zyphra AI released the large Zyda-2 dataset with 5 trillion tokens. François Chollet clarified that the transformer architecture is set-processing, not sequence-processing. Research suggests memorization aids LLM reasoning. Anthropic updated its Responsible Scaling Policy for AI safety. Tools like Perplexity Finance, Open Canvas by LangChain, and the AlphaCodium code generation tool were highlighted. Approximately $500 million was raised for AI agent startups, with ongoing discussions of AI's job-market impact. Combining prompt caching with the Batches API can yield a 95% discount on Claude 3.5 Sonnet tokens (a 50% batch discount on top of cache reads priced at 10% of base: 0.5 × 0.1 = 5% of the standard price).
Pixtral 12B: Mistral beats Llama to Multimodality
pixtral-12b mistral-nemo-12b llama-3-1-70b llama-3-1-8b deepseek-v2-5 gpt-4-turbo llama-3-1 strawberry claude mistral-ai meta-ai-fair hugging-face arcee-ai deepseek-ai openai anthropic vision multimodality ocr benchmarking model-release model-architecture model-performance fine-tuning model-deployment reasoning code-generation api access-control reach_vb devendra_chapilot _philschmid rohanpaul_ai
Mistral AI released Pixtral 12B, an open-weights vision-language model with a Mistral Nemo 12B text backbone and a 400M vision adapter, featuring a large 131,072-token vocabulary and support for 1024x1024-pixel images. With this release, Mistral beat Meta AI to shipping an open multimodal model. At the Mistral AI Summit, architecture details and benchmark results were shared, showing strong OCR and screen-understanding capabilities. Additionally, Arcee AI announced SuperNova, distilled Llama 3.1 70B and 8B models that outperform Meta's Llama 3.1 70B instruct on benchmarks. DeepSeek released DeepSeek-V2.5, scoring 89 on HumanEval and surpassing GPT-4-Turbo, Opus, and Llama 3.1 in coding tasks. OpenAI plans to release Strawberry as part of ChatGPT soon, though its capabilities are debated. Anthropic introduced Workspaces for managing multiple Claude deployments with enhanced access controls.
Mistral Large 2 + RIP Mistral 7B, 8x7B, 8x22B
mistral-large-2 mistral-nemo-12b llama-3.1-8b llama-3.1-70b llama-3.1 llama-3-405b yi-34b-200k gpt-4o mistral-ai meta-ai-fair groq togethercompute code-generation math function-calling reasoning context-windows model-deprecation pretraining posttraining benchmarking
Mistral Large 2 introduces 123B parameters with Open Weights under a Research License, focusing on code generation, math performance, and a massive 128k context window, improving over Mistral Large 1's 32k context. It claims better function calling capabilities than GPT-4o and enhanced reasoning. Meanwhile, Meta officially released Llama-3.1 models including Llama-3.1-70B and Llama-3.1-8B with detailed pre-training and post-training insights. The Llama-3.1 8B model's 128k context performance was found underwhelming compared to Mistral Nemo and Yi 34B 200K. Mistral is deprecating older Apache open-source models, focusing on Large 2 and Mistral Nemo 12B. The news also highlights community discussions and benchmarking comparisons.
Llama 3.1: The Synthetic Data Model
llama-3-405b llama-3-1 llama-3 meta-ai-fair groq fireworks synthetic-data fine-tuning reinforcement-learning multilinguality long-context tool-use code-generation math model-licensing inference-speed model-deployment bindureddy thomas
Meta AI has released Llama 3.1, including a 405B-parameter model that triggers regulatory considerations like the EU AI Act and SB 1047. The model incorporates extensive synthetic data techniques for code, math, multilinguality, long context, and tool-use fine-tuning, with RLHF using synthetic preference data from Llama 2 (a generic version of this pattern is sketched below). The launch was coordinated across major inference providers, with Groq demonstrating 750 tokens per second inference and Fireworks leading on pricing. The updated license explicitly allows synthetic data generation, marking a significant step for open frontier-class LLMs and cost-efficiency improvements since March.
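For intuition, here is a hedged, generic sketch of synthetic preference-pair construction for RLHF/DPO-style tuning. The `sample` and `judge` callables are hypothetical stand-ins for a generator model and a reward/judge model; this is not Meta's actual pipeline:

```python
def build_preference_pairs(prompts, sample, judge, k=4):
    """sample: prompt -> completion; judge: (prompt, completion) -> score.
    Both are hypothetical stand-ins, not any specific Meta component."""
    pairs = []
    for p in prompts:
        candidates = [sample(p) for _ in range(k)]             # k model generations
        ranked = sorted(candidates, key=lambda c: judge(p, c), reverse=True)
        pairs.append({
            "prompt": p,
            "chosen": ranked[0],    # judge's highest-scored completion
            "rejected": ranked[-1], # judge's lowest-scored completion
        })
    return pairs
```

The resulting (chosen, rejected) pairs can feed a preference-optimization objective without any human annotation, which is the appeal of synthetic preference data.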
Llama 3.1 Leaks: big bumps to 8B, minor bumps to 70B, and SOTA OSS 405B model
llama-3-1-405b llama-3-8b llama-3-70b llama-3-1-8b gpt-4o gpt-4o-mini claude-3-5 qwen-2 meta-ai-fair openai alibaba multilinguality code-generation context-windows model-training synthetic-data benchmarking reasoning fine-tuning model-performance dataset-release swyx philschmid jjitsev lewtun teknium1 adcock_brett
Llama 3.1 leaks reveal a 405B dense model with 128k context length, trained on 39.3M GPU hours using H100-80GB GPUs, and fine-tuned with over 25M synthetic examples. The model shows significant benchmark improvements, especially for the 8B and 70B variants, with some evals suggesting the 70B outperforms GPT-4o. GPT-4o Mini launched as a cost-efficient variant with strong performance but some reasoning weaknesses. Synthetic datasets like NuminaMath enable models such as Alibaba Qwen 2 to surpass GPT-4o and Claude 3.5 in math competitions. Discussions include reasoning task benchmarks and dataset building for improved reasoning.
Test-Time Training, MobileLLM, Lilian Weng on Hallucination (Plus: Turbopuffer)
llama-2-7b codegeex4-all-9b mamba facebook-research meta-ai-fair tsinghua-university hallucination-detection anti-hallucination-methods on-device-ai model-architecture rnn long-context-modeling model-scaling expressive-hidden-states code-generation lilian-weng yann-lecun
Lilian Weng released a comprehensive literature review on hallucination detection and anti-hallucination methods, covering techniques like FactualityPrompt, SelfCheckGPT, and WebGPT. Facebook AI Research (FAIR) published MobileLLM, a sub-billion-parameter on-device language model architecture achieving performance comparable to Llama-2-7B through innovations like thin-and-deep models and shared weights. A new RNN-based LLM architecture with expressive hidden states was introduced (the Test-Time Training work of the title), replacing attention with hidden states that are themselves small models updated by self-supervised steps, and scaling better than Mamba and Transformer baselines for long-context modeling (see the sketch below). Additionally, Tsinghua University open-sourced CodeGeeX4-ALL-9B, a multilingual code generation model excelling at code assistance.
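A toy illustration of that idea, heavily simplified relative to the actual architecture: here the layer's hidden state is the weight matrix of a tiny linear model, and processing a token means taking one gradient step on a self-supervised reconstruction loss before reading out. All dimensions, the loss, and the learning rate are assumptions of this sketch:

```python
import torch

d = 16
W = torch.zeros(d, d, requires_grad=True)  # hidden state = weights of a tiny linear model
lr = 0.1

def update(W, x):
    # Inner self-supervised loss: reconstruct the token from its own projection.
    loss = ((x @ W - x) ** 2).mean()
    (g,) = torch.autograd.grad(loss, W)
    return (W - lr * g).detach().requires_grad_(True)  # one "training" step per token

tokens = torch.randn(10, d)
for x in tokens:
    W = update(W, x)  # write: the hidden state learns from the current token
    y = x @ W         # read: transform the token with the updated state
```

The contrast with attention is that the context is compressed into learned weights rather than re-read as a growing key-value cache, which is why such layers scale gracefully to long contexts.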
Gemini Nano: 50-90% of Gemini Pro, <100ms inference, on device, in Chrome Canary
gemini-nano gemini-pro claude-3.5-sonnet gpt-4o deepseek-coder-v2 glm-0520 nemotron-4-340b gpt-4-turbo-0409 google gemini huggingface anthropic deepseek zhipu-ai tsinghua nvidia model-quantization prompt-api optimization model-weights benchmarking code-generation math synthetic-data automatic-differentiation retrieval-augmented-generation mitigating-memorization tree-search inference-time-algorithms adcock_brett dair_ai lmsysorg
The latest Chrome Canary now includes a feature flag for Gemini Nano, offering a prompt API and on-device optimization guide, with models Nano 1 and 2 at 1.8B and 3.25B parameters respectively, showing decent performance relative to Gemini Pro. The base and instruct-tuned model weights have been extracted and posted to HuggingFace. In AI model releases, Anthropic launched Claude 3.5 Sonnet, which outperforms GPT-4o on some benchmarks, is twice as fast as Opus, and is free to try. DeepSeek-Coder-V2 achieves 90.2% on HumanEval and 75.7% on MATH, surpassing GPT-4-Turbo-0409, with models up to 236B parameters and 128K context length. GLM-0520 from Zhipu AI/Tsinghua ranks highly in coding and overall benchmarks. NVIDIA announced Nemotron-4 340B, an open model family for synthetic data generation. Research highlights include TextGrad, a framework for automatic differentiation on textual feedback; PlanRAG, an iterative plan-then-RAG decision-making technique; a paper on goldfish loss to mitigate memorization in LLMs; and a tree search algorithm for language model agents.
There's Ilya!
chameleon-7b chameleon-34b deepseek-coder-v2 gpt-4-turbo claude-3-opus voco-llama safe-superintelligence-inc openai anthropic meta deepseek google-deepmind parallel-decoding code-generation quantization training-dynamics vision benchmarks datasets image-captioning reasoning memory-optimization ilya-sutskever jan-leike ylecun akhaliq philschmid rohanpaul_ai mervenoyann fchollet
Ilya Sutskever co-founded Safe Superintelligence Inc. shortly after leaving OpenAI, while Jan Leike moved to Anthropic. Meta released new models including Chameleon 7B and 34B, with mixed-modal input quantized into a unified token space. DeepSeek-Coder-V2 shows code capabilities comparable to GPT-4 Turbo, supporting 338 programming languages and 128K context length. Consistency Large Language Models (CLLMs) enable parallel decoding, generating multiple tokens per step. Grokked Transformers demonstrate how training dynamics affect memory formation and generalization in reasoning. VoCo-LLaMA compresses vision tokens with LLMs, improving video temporal-correlation understanding. The BigCodeBench benchmark evaluates LLMs on 1,140 coding tasks across 139 Python libraries, topped by DeepSeek-Coder-V2 and Claude 3 Opus. PixelProse is a large 16M image-caption dataset with reduced toxicity.
ALL of AI Engineering in One Place
claude-3-sonnet claude-3 openai google-deepmind anthropic mistral-ai cohere hugging-face adept midjourney character-ai microsoft amazon nvidia salesforce mastercard palo-alto-networks axa novartis discord twilio tinder khan-academy sourcegraph mongodb neo4j hasura modular cognition anysphere perplexity-ai groq mozilla nous-research galileo unsloth langchain llamaindex instructor weights-biases lambda-labs neptune datastax crusoe covalent qdrant baseten e2b octo-ai gradient-ai lancedb log10 deepgram outlines crew-ai factory-ai interpretability feature-steering safety multilinguality multimodality rag evals-ops open-models code-generation gpus agents ai-leadership
The upcoming AI Engineer World's Fair in San Francisco from June 25-27 will feature a significantly expanded format with booths, talks, and workshops from top model labs like OpenAI, DeepMind, Anthropic, Mistral, Cohere, HuggingFace, and Character.ai. It includes participation from Microsoft Azure, Amazon AWS, Google Vertex, and major companies such as Nvidia, Salesforce, Mastercard, Palo Alto Networks, and more. The event covers 9 tracks including RAG, multimodality, evals/ops, open models, code generation, GPUs, agents, AI in Fortune 500, and a new AI leadership track. Additionally, Anthropic shared interpretability research on Claude 3 Sonnet, revealing millions of interpretable features that can be steered to modify model behavior, including safety-relevant features related to bias and unsafe content, though more research is needed for practical applications. The event offers a discount code for AI News readers.
Mixtral 8x22B Instruct sparks efficiency memes
mixtral-8x22b llama-2-7b olmo-7b mistral-ai hugging-face google microsoft intel softbank nvidia multilinguality math code-generation context-window model-performance model-release retrieval-augmented-generation deepfake ai-investment ai-chip hybrid-architecture training-data guillaume-lample osanseviero _philschmid svpino
Mistral released an instruct-tuned version of their Mixtral 8x22B model, notable for using only 39B active parameters during inference, outperforming larger models and supporting 5 languages with 64k context window and math/code capabilities. The model is available on Hugging Face under an Apache 2.0 license for local use. Google plans to invest over $100 billion in AI, with other giants like Microsoft, Intel, and SoftBank also making large investments. The UK criminalized non-consensual deepfake porn, raising enforcement debates. A former Nvidia employee claims Nvidia's AI chip lead is unmatchable this decade. AI companions could become a $1 billion market. AI has surpassed humans on several basic tasks but lags on complex ones. Zyphra introduced Zamba, a novel 7B parameter hybrid model outperforming LLaMA-2 7B and OLMo-7B with less training data, trained on 128 H100 GPUs over 30 days. GroundX API advances retrieval-augmented generation accuracy.
Evals-based AI Engineering
jamba bamboo qwen-1.5-moe grok-1.5 llama2-7b openai mistral-ai x-ai llamaindex evaluation fine-tuning prompt-engineering voice-cloning quantization model-optimization code-generation context-windows hamel-husain alec-radford
Hamel Husain emphasizes the importance of comprehensive evals in AI product development, highlighting evaluation, debugging, and behavior change as key iterative steps. OpenAI released a voice engine demo showcasing advanced voice cloning from small samples, raising safety concerns. Reddit discussions introduced new models like Jamba (hybrid Transformer-SSM with MoE), Bamboo (7B LLM with high sparsity based on Mistral), Qwen1.5-MoE (efficient parameter activation), and Grok 1.5 (128k context length, surpassing GPT-4 in code generation). Advances in quantization include 1-bit Llama2-7B models outperforming full precision and the QLLM quantization toolbox supporting GPTQ/AWQ/HQQ methods.
DBRX: Best open model (just not most efficient)
dbrx grok mixtral llama-2 mpt-7b gpt-4 databricks hugging-face mistral-ai mosaicml openai mixture-of-experts model-efficiency tokenization model-training code-generation model-architecture open-source-models benchmarking fine-tuning
Databricks Mosaic has released a new open-source model called DBRX that outperforms Grok, Mixtral, and Llama 2 on evaluations while being about 2x more efficient than Llama 2 and Grok. The model was trained on 12 trillion tokens using 3,000 H100 GPUs over 2 months, with an estimated compute cost of $10 million (a back-of-the-envelope check follows below). It uses OpenAI's 100k tiktoken tokenizer and shows strong zero-shot code generation performance, even beating GPT-4 on the HumanEval benchmark. DBRX also upstreamed work to the MegaBlocks open-source project. Despite its scale and efficiency, DBRX's performance on MMLU is only slightly better than Mixtral's, raising questions about its scaling efficiency. The focus of DBRX is on enabling users to train models efficiently, with MoE training being about 2x more FLOP-efficient than dense models and achieving similar quality with nearly 4x less compute than previous MPT models. This release is part of the ongoing competition for open-source AI leadership, including models like Dolly, MPT, and Mistral. "If it activates 36B params, the model's perf should be equivalent to a 72B dense model or even 80B," says Qwen's tech lead.
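As a sanity check on those training numbers, the standard ~6·N·D approximation for training FLOPs can be applied with N taken as active parameters per token, which is an assumption of this sketch rather than Databricks' published accounting:

```python
# Approximate training compute via the common 6*N*D rule of thumb,
# counting only active (routed) parameters per token for the MoE.
active_params = 36e9  # DBRX active parameters per token (from the quote above)
tokens = 12e12        # reported training tokens
flops = 6 * active_params * tokens
print(f"~{flops:.2e} training FLOPs")  # ~2.59e+24
```

At a few 10^24 FLOPs, the $10M compute estimate and the roughly 2x FLOP-efficiency claim over a dense model of comparable quality are at least mutually consistent in order of magnitude.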
Dia de las Secuelas (StarCoder, The Stack, Dune, SemiAnalysis)
starcoder-2 starcoder2-15b hugging-face bigcode code-generation model-training dataset-release model-performance dylan-patel
HuggingFace/BigCode has released StarCoder v2, including the StarCoder2-15B model trained on over 600 programming languages using The Stack v2 dataset. The release is state-of-the-art for models of this size, with opt-out requests excluded from the training data. A detailed technical report covers the model's capabilities and training methodology. Additionally, a live event featuring Dylan Patel discussing GPU economics was announced for San Francisco.
Qwen 1.5 Released
qwen-1.5 mistral-7b sparsetral-16x7b-v2 bagel-7b-v0.4 deepseek-math-7b-instruct deepseek qwen mistral-ai hugging-face meta-ai-fair quantization token-context multilinguality retrieval-augmented-generation agent-planning code-generation sparse-moe model-merging fine-tuning direct-preference-optimization character-generation ascii-art kanji-generation vr retinal-resolution light-field-passthrough frozen-networks normalization-layers
Chinese AI models Yi, DeepSeek, and Qwen are gaining attention for strong performance, with Qwen 1.5 offering up to 32k token context and compatibility with Hugging Face transformers and quantized variants (a minimal loading sketch follows below). The TheBloke Discord discussed topics like quantization of a 70B LLM, the introduction of Sparsetral, a sparse MoE model based on Mistral, debates on merging vs. fine-tuning, and Direct Preference Optimization (DPO) for character generation. The Nous Research AI Discord covered challenges in Japanese Kanji generation, AI scams on social media, and Meta's VR headset prototypes showcased at SIGGRAPH 2023. Discussions also included fine-tuning frozen networks and new models like bagel-7b-v0.4, DeepSeek-Math-7b-instruct, and Sparsetral-16x7B-v2.
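For reference, a minimal sketch of loading Qwen 1.5 with stock transformers (no trust_remote_code); the checkpoint name "Qwen/Qwen1.5-7B-Chat", the prompt, and the generation settings are assumptions of the example:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/Qwen1.5-7B-Chat"  # assumed checkpoint name for this sketch
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [{"role": "user", "content": "Write a haiku about context windows."}]
inputs = tok.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
out = model.generate(inputs, max_new_tokens=64)
# Decode only the newly generated tokens, skipping the prompt.
print(tok.decode(out[0][inputs.shape[-1]:], skip_special_tokens=True))
```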
CodeLlama 70B beats GPT-4 on HumanEval
codellama miqu mistral-medium llama-2-70b aphrodite-engine mixtral flatdolphinmaid noromaid rpcal chatml mistral-7b activation-beacon eagle-7b rwkv-v5 openhermes2.5 nous-hermes-2-mixtral-8x7b-dpo imp-v1-3b bakllava moondream qwen-vl meta-ai-fair ollama nous-research mistral-ai hugging-face ai-ethics alignment gpu-optimization direct-prompt-optimization fine-tuning cuda-programming optimizer-technology quantization multimodality context-length dense-retrieval retrieval-augmented-generation multilinguality model-performance open-source code-generation classification vision
Meta AI surprised the community with the release of CodeLlama 70B, an open-source model now available on platforms like Ollama and MLX for local use. The Miqu model sparked debate over its origins, possibly linked to Mistral Medium or a fine-tuned Llama-2-70B, alongside discussions on AI ethics and alignment risks. The Aphrodite engine showed strong performance on A6000 GPUs with specific configurations. Role-playing AI models such as Mixtral and Flatdolphinmaid faced challenges with repetitiveness, while Noromaid and Rpcal performed better, with ChatML and DPO recommended for improved responses. Learning resources like fast.ai's course were highlighted for ML/DL beginners, and fine-tuning techniques with optimizers like paged 8-bit Lion and Adafactor were discussed.
At Nous Research AI, the Activation Beacon project introduced a method for unlimited context length in LLMs using "global state" tokens, potentially transforming retrieval-augmented models. The Eagle-7B model, based on RWKV-v5, outperformed Mistral in benchmarks with efficiency and multilingual capabilities. OpenHermes2.5 was recommended for consumer hardware due to its quantization methods. Multimodal and domain-specific models like IMP v1-3b, Bakllava, Moondream, and Qwen-vl were explored for classification and vision-language tasks. The community emphasized centralizing AI resources for collaborative research.
12/15/2023: Mixtral-Instruct beats Gemini Pro (and matches GPT3.5)
mixtral gemini-pro gpt-3.5 gpt-4.5 gpt-4 chatgpt lmsys openai deepseek cloudflare huggingface performance context-window prompt-engineering privacy local-gpu cloud-gpu code-generation model-comparison model-usage api-errors karpathy
Thanks to a Karpathy shoutout, LMSYS now has enough data to rank Mixtral and Gemini Pro. The discussion highlights the impressive performance of state-of-the-art open models like Mixtral that can run on laptops. In the OpenAI Discord, users compared AI tools like Perplexity and ChatGPT's browsing tool, favoring Perplexity for its superior data gathering, pricing, and usage limits. Interest was shown in AI's ability to convert large code files, with DeepSeek Coder recommended. Debates on privacy implications for AI advancement and challenges of running LLMs on local and cloud GPUs were prominent. Users reported issues with ChatGPT including performance problems, loss of access to custom GPTs, and unauthorized access. Discussions also covered prompt engineering for large context windows and speculation about GPT-4.5 and GPT-4 future developments.
12/14/2023: $1e7 for Superalignment
gemini bard gpt-4 gpt-4.5 llama-2 openai llamaindex perplexity-ai prompt-engineering api custom-gpt json bug-fixes chatbots performance tts code-generation image-recognition jan-leike patrick-collison
Jan Leike is launching a new grant initiative inspired by Patrick Collison's Fast Grants to support AI research. OpenAI introduced a new developer Twitter handle, @OpenAIDevs, for community updates. Discussions of Google's Gemini and Bard chatbots highlight their ability to read each other's instructions and offer distinct coding solutions. Users reported various issues with GPT-4, including performance problems, customization difficulties, and a since-resolved bug in image recognition. There are ongoing conversations about prompt-engineering challenges and new JSON-mode support in Convo-lang for API use. Concerns about misuse of chatbots for illegal activities, and alternatives like Llama 2 models and the Perplexity chatbot, were also discussed.