Company: "stability-ai"
not much happened today
kernelllm-8b gpt-4o deepseek-v3 mistral-medium-3 qwen3 blip3-o xgen-small anisora stable-audio-open-small alphaevolve meta-ai-fair mistral-ai qwen deepseek salesforce bilibili stability-ai google benchmarking model-performance multilinguality hardware-optimization multimodality image-generation video-generation text-to-audio model-parallelism chain-of-thought instruction-following reasoning mitigation-strategies reach_vb lmarena_ai theadimeline adcock_brett jxmnop dair_ai omarsar0
Meta released KernelLLM 8B, outperforming GPT-4o and DeepSeek V3 on KernelBench-Triton Level 1. Mistral Medium 3 debuted strongly across multiple benchmarks. Qwen3 models introduced a unified framework with multilingual support. DeepSeek-V3 features hardware-aware co-design. The BLIP3-o family was released for multimodal tasks using diffusion transformers. Salesforce launched xGen-Small models excelling in long-context and math benchmarks. Bilibili released AniSora for anime video generation. Stability AI open-sourced Stable Audio Open Small, optimized for Arm devices. Google’s AlphaEvolve coding agent delivered the first improvement on Strassen's 1969 matrix-multiplication algorithm in its setting. New research shows that "chain-of-thought (CoT) reasoning can harm a model’s ability to follow instructions" and that "mitigation strategies such as few-shot in-context learning, self-reflection, self-selective reasoning, and classifier-selective reasoning can counteract reasoning-induced failures", with classifier-selective reasoning the most effective, though all techniques show high variance and limited generalization.
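A minimal sketch of the classifier-selective reasoning idea, assuming hypothetical `classifier` and `llm` objects (names are illustrative, not from the paper's code):

```python
# Sketch of classifier-selective reasoning: invoke chain-of-thought only when
# a trained classifier predicts it will not hurt instruction following.
# `classifier` and `llm` are hypothetical stand-ins, not the paper's code.

def answer(prompt: str, llm, classifier) -> str:
    if classifier.predict(prompt):  # True -> CoT expected to help here
        return llm.generate(f"{prompt}\n\nLet's think step by step.")
    return llm.generate(prompt)  # answer directly, preserving instruction following
```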
Halfmoon is Reve Image: a new SOTA Image Model from ex-Adobe/Stability trio
deepseek-v3-0324 qwen-2.5-vl-32b-instruct recraft artificial-analysis stability-ai adobe deepseek alibaba text-to-image prompt-understanding model-composition visual-generation language-understanding model-performance complex-prompting iterative-generation christian-cantrell taesung-park michael-gharbi
Reve, a new composite AI model from former Adobe and Stability alums Christian Cantrell, Taesung Park, and Michaël Gharbi, has emerged as the top-rated image generation model, surpassing previous state-of-the-art models like Recraft and Ideogram in text rendering and typography. The team emphasizes "enhancing visual generative models with logic" and "understanding user intent with advanced language capabilities" to iteratively amend visuals based on natural language input. Additionally, DeepSeek-V3-0324 and Alibaba's Qwen2.5-VL-32B-Instruct models were released with notable performance improvements, including better vision task benchmarks and mathematical reasoning.
not much happened today
oute-tts-0.3-1b oute-tts-0.3-500m olm-1b qwen-2.5-0.5b hover gpt-4o deepseek-v3 harvey meta-ai-fair stability-ai alibaba deepseek hugging-face text-to-speech zero-shot-learning multilinguality emotion-control motor-control reinforcement-learning local-ai distributed-inference pipeline-parallelism mathematical-reasoning process-reward-models legal-ai education-ai ai-security humor reach_vb drjimfan vikhyatk mervenoyann aiatmeta iscienceluvr alibaba_qwen awnihannun ajeya_cotra emollick qtnx_ designerx
Harvey secured a new $300M funding round. OuteTTS 0.3 1B & 500M text-to-speech models were released featuring zero-shot voice cloning, multilingual support (en, jp, ko, zh, fr, de), and emotion control, powered by OLMo-1B and Qwen 2.5 0.5B. The HOVER model, a 1.5M-parameter neural net for agile motor control, was introduced, leveraging human motion capture datasets and massively parallel reinforcement learning. kokoro.js enables running AI models locally in browsers with minimal dependencies. Meta AI awarded $200K LLM evaluation grants for projects on regional language understanding, complex reasoning, and interactive programming environments. Stability AI's Twitter account was hacked, prompting security warnings. Alibaba Qwen improved Process Reward Models (PRMs) for better mathematical reasoning using a consensus filtering mechanism. DeepSeek V3 uses pipeline parallelism to enhance distributed inference and long-context generation efficiency. Discussions on AI policy in legal frameworks and AI's role in democratizing education were highlighted. Lighthearted AI-related humor was also shared.
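A hedged sketch of the consensus-filtering idea for PRM training data: keep a step-labeled solution only when Monte Carlo estimation and an LLM judge agree on where the first error occurs (function names are illustrative):

```python
# Consensus filtering for Process Reward Model training data: retain a
# solution's step labels only when two independent annotators (Monte Carlo
# rollouts and an LLM-as-judge) agree on the first erroneous step.

def consensus_filter(solutions, mc_first_error, judge_first_error):
    kept = []
    for sol in solutions:
        mc = mc_first_error(sol)        # step index from Monte Carlo estimation
        judge = judge_first_error(sol)  # step index from LLM-as-judge
        if mc == judge:                 # consensus -> trust the label
            kept.append((sol, mc))
    return kept
```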
s{imple|table|calable} Consistency Models
llama-3-70b llama-3-405b llama-3-1 stable-diffusion-3.5 gpt-4 stability-ai tesla cerebras cohere langchain model-distillation diffusion-models continuous-time-consistency-models image-generation ai-hardware inference-speed multilingual-models yang-song
Model distillation significantly accelerates diffusion models, enabling near real-time image generation in only 1-4 sampling steps, as seen in BlinkShot and Flux Schnell. Research led by Yang Song introduced simplified continuous-time consistency models (sCMs), which narrow the FID gap to the best diffusion models to under 10% in just 2 sampling steps and scale up to 1.5B parameters for higher quality. On AI hardware, Tesla is deploying a 50k H100 cluster potentially capable of completing GPT-4 training in under three weeks, while Cerebras Systems set a new inference speed record on Llama 3.1 70B with their wafer-scale AI chips. Stability AI released Stable Diffusion 3.5 and its Turbo variant, and Cohere launched new multilingual models supporting 23 languages with state-of-the-art performance. LangChain also announced ecosystem updates.
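A hedged sketch of few-step consistency-model sampling in the generic formulation (not the sCM paper's exact code): one jump from noise to data, then a re-noise-and-denoise refinement:

```python
import torch

# Two-step consistency-model sampling: the consistency function f(x, sigma)
# maps any noisy input directly to a clean sample. Sigma values are
# illustrative, not the paper's schedule.

def sample_two_step(f, shape, sigma_max=80.0, sigma_mid=0.8):
    x = torch.randn(shape) * sigma_max
    x0 = f(x, sigma_max)                     # step 1: noise -> data in one jump
    x = x0 + torch.randn(shape) * sigma_mid  # re-noise to an intermediate level
    return f(x, sigma_mid)                   # step 2: refinement
```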
GPT4o August + 100% Structured Outputs for All (GPT4o mini edition)
gpt-4o-mini gpt-4o-2024-08-06 llama-3 bigllama-3.1-1t-instruct meta-llama-3-120b-instruct gemma-2-2b stability-ai unsloth-ai google hugging-face lora controlnet line-art gpu-performance multi-gpu-support fine-tuning prompt-formatting cloud-computing text-to-image-generation model-integration
Stability.ai users are leveraging LoRA and ControlNet for enhanced line art and artistic style transformations, while facing challenges on AMD GPUs due to the discontinuation of ZLUDA. Community tensions persist around r/stablediffusion moderation. Unsloth AI users report fine-tuning difficulties with Llama 3 models, especially with PPO trainer integration and prompt formatting, alongside anticipation for multi-GPU support and cost-effective cloud computing on RunPod. Google released Gemma 2 2B, a lightweight 2.6B-parameter model optimized for on-device use that ships with safety and sparse autoencoder tools, and announced Diffusers integration for efficient text-to-image generation on limited resources.
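The issue title refers to OpenAI extending guaranteed Structured Outputs to gpt-4o-mini; a minimal example using the json_schema response format (the schema contents are illustrative):

```python
from openai import OpenAI

client = OpenAI()
resp = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Extract: Jane, 34, engineer"}],
    response_format={
        "type": "json_schema",
        "json_schema": {
            "name": "person",
            "strict": True,  # constrained decoding guarantees schema-valid JSON
            "schema": {
                "type": "object",
                "properties": {
                    "name": {"type": "string"},
                    "age": {"type": "integer"},
                    "job": {"type": "string"},
                },
                "required": ["name", "age", "job"],
                "additionalProperties": False,
            },
        },
    },
)
print(resp.choices[0].message.content)  # always parses against the schema above
```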
How Carlini Uses AI
gemma-2-2b gpt-3.5-turbo-0613 mixtral-8x7b gen-3-alpha segment-anything-model-2 stable-fast-3d groq intel deepmind box figure-ai openai google meta-ai-fair nvidia stability-ai runway benchmarking adversarial-attacks large-language-models text-generation multimodality robotics emotion-detection structured-data-extraction real-time-processing teleoperation 3d-generation text-to-video nicholas-carlini chris-dixon rasbt
Groq's shareholders' net worth rises while others fall, with Intel's CEO expressing concern. Nicholas Carlini of DeepMind gains recognition and criticism for his extensive AI writings, including an 80,000-word treatise on AI use and a benchmark for large language models. Chris Dixon comments on AI Winter skepticism, emphasizing long-term impact. Box introduces an AI API for extracting structured data from documents, highlighting potential and risks of LLM-driven solutions. Recent AI developments include Figure AI launching the advanced humanoid robot Figure 02, OpenAI rolling out Advanced Voice Mode for ChatGPT with emotion detection, Google open-sourcing Gemma 2 2B model matching GPT-3.5-Turbo-0613 performance, Meta AI Fair releasing Segment Anything Model 2 (SAM 2) for real-time object tracking, NVIDIA showcasing Project GR00T for humanoid teleoperation with Apple Vision Pro, Stability AI launching Stable Fast 3D for rapid 3D asset generation, and Runway unveiling Gen-3 Alpha for AI text-to-video generation.
Execuhires: Tempting The Wrath of Khan
gemini-1.5-pro gpt-4o claude-3.5 flux-1 llama-3-1-405b character.ai google adept amazon inflection microsoft stability-ai black-forest-labs schelling google-deepmind openai anthropic meta-ai-fair lmsys langchainai execuhire model-benchmarking multilinguality math coding text-to-image agent-ide open-source-models post-training data-driven-performance noam-shazeer mostafa-mostaque david-friedman rob-rombach alexandr-wang svpino rohanpaul_ai
Character.ai's $2.5b execuhire to Google marks a significant leadership move alongside Adept's $429m execuhire to Amazon and Inflection's $650m execuhire to Microsoft. Despite strong user growth and content momentum, Character.ai's CEO Noam Shazeer returns to Google, signaling shifting vibes in the AI industry. Google DeepMind's Gemini 1.5 Pro tops Chatbot Arena benchmarks, outperforming GPT-4o and Claude-3.5, excelling in multilingual, math, and coding tasks. The launch of Black Forest Labs' FLUX.1 text-to-image model and LangGraph Studio agent IDE highlight ongoing innovation. Llama 3.1 405B is released as the largest open-source model, fostering developer use and competition with closed models. The industry is focusing increasingly on post-training and data as key competitive factors, raising questions about acquisition practices and regulatory scrutiny.
Rombach et al: FLUX.1 [pro|dev|schnell], $31m seed for Black Forest Labs
gemma-2-2b gpt-3.5-turbo-0613 mixtral-8x7b flux-1 stability-ai google-deepmind nvidia text-to-image text-to-video model-benchmarking open-weight-models model-distillation safety-classifiers sparse-autoencoders ai-coding-tools rohanpaul_ai fchollet bindureddy clementdelangue ylecun svpino
Robin Rombach, co-creator of Stable Diffusion and formerly of Stability AI, launched FLUX.1 at his new startup Black Forest Labs (fresh off a $31M seed), a text-to-image model with three variants: pro (API only), dev (open-weight, non-commercial), and schnell (Apache 2.0). FLUX.1 outperforms Midjourney and Ideogram based on Black Forest Labs' ELO scores, and the company plans to expand into text-to-video. Google DeepMind released Gemma-2 2B, a 2 billion parameter open-source model that outperforms larger models like GPT-3.5-Turbo-0613 and Mixtral-8x7b on Chatbot Arena, optimized with NVIDIA TensorRT-LLM. The release includes safety classifiers (ShieldGemma) and sparse autoencoder analysis (Gemma Scope). Discussions highlight benchmarking discrepancies and US government support for open-weight AI models. Critiques of AI coding tools' productivity gains were also noted.
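A hedged example of running the Apache-2.0 schnell variant with Hugging Face Diffusers (support landed shortly after release; parameter values are commonly cited defaults, not official guidance):

```python
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-schnell", torch_dtype=torch.bfloat16
)
pipe.enable_model_cpu_offload()  # trade speed for lower VRAM usage

image = pipe(
    "a lighthouse at dawn, watercolor",
    num_inference_steps=4,  # schnell is timestep-distilled for few-step sampling
    guidance_scale=0.0,     # schnell runs without classifier-free guidance
).images[0]
image.save("flux_schnell.png")
```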
Is this... OpenQ*?
deepseek-coder-v2 llama-3-8b nemotron-4-340b stable-diffusion-3-medium deepseek_ai anthropic runwayml openai apple nvidia stability-ai luma-labs reward-tampering test-time-search mathematical-reasoning process-supervision fine-tuning on-device-ai video-generation cost-efficiency context-length coding image-understanding multimodality adcock_brett clementdelangue svpino
DeepSeek-Coder-V2 promises GPT-4-Turbo-beating performance at a fraction of the cost, excelling in coding and math with support for 338 programming languages and a 128K context length. Anthropic released new research on reward tampering. Runway launched Gen-3 Alpha, its video-generation answer to Sora. A series of papers explores "test-time" search techniques that improve mathematical reasoning with models like Llama-3 8B. Apple announced Apple Intelligence with a smarter Siri and image/document understanding, partnered with OpenAI to integrate ChatGPT into iOS 18, and released 20 new CoreML models with LoRA fine-tuning for specialization. NVIDIA released Nemotron-4 340B, an open model matching GPT-4 performance. Stability AI released the Stable Diffusion 3 Medium weights. Luma Labs launched Dream Machine for 5-second video generation from text and images.
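A hedged sketch of the simplest test-time search strategy (best-of-N with a verifier), in the spirit of the papers mentioned above; `llm.sample` and `verifier.score` are illustrative stand-ins:

```python
# Best-of-N test-time search: sample N candidate solutions at nonzero
# temperature and return the one a verifier/reward model scores highest.

def best_of_n(problem: str, llm, verifier, n: int = 16) -> str:
    candidates = [llm.sample(problem, temperature=0.8) for _ in range(n)]
    return max(candidates, key=lambda sol: verifier.score(problem, sol))
```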
The Last Hurrah of Stable Diffusion?
llama-3-8b llama-3 qwen-2 gpt-4 gpt-4o stability-ai togethercompute model-architecture fine-tuning benchmarks dataset-release model-evaluation reasoning model-training retrieval-augmented-generation multimodality emad-mostaque rohanpaul_ai fchollet mikeknoop micahgoldblum teknium1 rasbt percyliang
Stability AI launched Stable Diffusion 3 Medium with models ranging from 450M to 8B parameters, featuring the MMDiT architecture and T5 text encoder for image text rendering. The community has shown mixed reactions following the departure of key researchers like Emad Mostaque. On AI models, Llama 3 8B Instruct shows strong evaluation correlation with GPT-4, while Qwen 2 Instruct surpasses Llama 3 on MMLU benchmarks. The Mixture of Agents (MoA) framework outperforms GPT-4o on AlpacaEval 2.0. Techniques like Spectrum and QLoRA enable efficient fine-tuning with less VRAM. Research on grokking reveals transformers can transition from memorization to generalization through extended training. Benchmark initiatives include the $1M ARC Prize Challenge for AGI progress and LiveBench, a live LLM benchmark to prevent dataset contamination. The Character Codex Dataset offers open data on over 15,000 characters for RAG and synthetic data. The MLX 0.2 tool enhances LLM experience on Apple Silicon Macs with improved UI and faster retrieval-augmented generation.
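A hedged sketch of a QLoRA-style setup with `transformers` and `peft`: the base model is loaded in 4-bit NF4 and only low-rank adapters are trained (model id and hyperparameters are illustrative):

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

bnb = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",              # NormalFloat4 quantization
    bnb_4bit_compute_dtype=torch.bfloat16,  # dequantize to bf16 for matmuls
)
model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Meta-Llama-3-8B-Instruct", quantization_config=bnb
)
lora = LoraConfig(r=16, lora_alpha=32, target_modules=["q_proj", "v_proj"])
model = get_peft_model(model, lora)
model.print_trainable_parameters()  # only the adapter weights are trainable
```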
Meta Llama 3 (8B, 70B)
llama-3-8b llama-3-70b llama-3-400b stable-diffusion-3 mixtral-8x22b-instruct-v0.1 vasa-1 meta-ai-fair stability-ai boston-dynamics microsoft mistral-ai hugging-face transformer tokenization model-training benchmarking robotics natural-language-processing real-time-processing synthetic-data dataset-cleaning behavior-trees ai-safety model-accuracy api model-release humor helen-toner
Meta partially released Llama 3, including 8B and 70B variants with a 400B variant still in training, touted as the first GPT-4-level open-source model; the release features improved reasoning, a 128K-token vocabulary, 8K-token context, and grouped query attention. Stability AI launched the Stable Diffusion 3 API with model weights coming soon, showing competitive realism against Midjourney V6. Boston Dynamics unveiled an electric humanoid robot Atlas, and Microsoft introduced the VASA-1 model generating lifelike talking faces at 40fps on an RTX 4090. Mistral AI, a European OpenAI rival, is reportedly seeking funding at a $5B valuation, with its Mixtral-8x22B-Instruct-v0.1 model achieving 100% accuracy on 64K-context benchmarks. AI safety discussions include calls from former OpenAI board member Helen Toner for audits of top AI companies, and the Mormon Church released AI usage principles. New AI development tools include Ctrl-Adapter for diffusion models, Distilabel 1.0.0 for synthetic dataset pipelines, Data Bonsai for data cleaning with LLMs, and Dendron for building LLM agents with behavior trees. Memes highlight AI development humor and cultural references.
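A minimal sketch of grouped query attention (GQA) as used in Llama 3, where several query heads share each key/value head to shrink the KV cache:

```python
import torch

def grouped_query_attention(q, k, v, n_q_heads: int, n_kv_heads: int):
    # q: (batch, n_q_heads, seq, d); k, v: (batch, n_kv_heads, seq, d)
    repeat = n_q_heads // n_kv_heads
    k = k.repeat_interleave(repeat, dim=1)  # broadcast each KV head to its group
    v = v.repeat_interleave(repeat, dim=1)
    scores = q @ k.transpose(-2, -1) / (q.shape[-1] ** 0.5)
    return torch.softmax(scores, dim=-1) @ v  # causal mask omitted for brevity
```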
Mergestral, Meta MTIAv2, Cohere Rerank 3, Google Infini-Attention
mistral-8x22b command-r-plus rerank-3 infini-attention llama-3 sd-1.5 cosxl meta-ai-fair mistral-ai cohere google stability-ai hugging-face ollama model-merging training-accelerators retrieval-augmented-generation linear-attention long-context foundation-models image-generation rag-pipelines model-benchmarking context-length model-performance aidan_gomez ylecun swyx
Meta announced their new MTIAv2 chips designed for training and inference acceleration with improved architecture and integration with PyTorch 2.0. Mistral released the 8x22B Mixtral model, which was merged back into a dense model to effectively create a 22B Mistral model. Cohere launched Rerank 3, a foundation model enhancing enterprise search and retrieval-augmented generation (RAG) systems supporting 100+ languages. Google published a paper on Infini-attention, an ultra-scalable linear attention mechanism demonstrated on 1B and 8B models with 1 million sequence length. Additionally, Meta's Llama 3 is expected to start rolling out soon. Other notable updates include Command R+, an open model surpassing GPT-4 in chatbot performance with 128k context length, and advancements in Stable Diffusion models and RAG pipelines.
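A hedged sketch of Infini-attention's compressive memory, following the paper's linear-attention formulation (the local-attention branch and the learned gate that mixes the two are omitted):

```python
import torch
import torch.nn.functional as F

def sigma(x):
    return F.elu(x) + 1  # the paper's kernel nonlinearity

def memory_update(M, z, K, V):
    # M: (d_k, d_v) associative memory; z: (1, d_k) normalization term
    M = M + sigma(K).transpose(-2, -1) @ V
    z = z + sigma(K).sum(dim=-2, keepdim=True)
    return M, z

def memory_retrieve(M, z, Q):
    s = sigma(Q)                                # (seq, d_k)
    return (s @ M) / (s @ z.transpose(-2, -1))  # linear-attention readout
```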
Cohere Command R+, Anthropic Claude Tool Use, OpenAI Finetuning
c4ai-command-r-plus claude-3 gpt-3.5-turbo gemini mistral-7b gemma-2 claude-3-5 llama-3 vicuna cohere anthropic openai microsoft stability-ai opera-software meta-ai-fair google-deepmind mistral-ai tool-use multilingual-models rag fine-tuning quantum-computing audio-generation local-inference context-windows model-size-analysis model-comparison
Cohere launched Command R+, a 104B dense model with 128k context length focusing on RAG, tool-use, and multilingual capabilities across 10 key languages. It supports Multi-Step Tool use and offers open weights for research. Anthropic introduced tool use in beta for Claude, supporting over 250 tools with new cookbooks for practical applications. OpenAI enhanced its fine-tuning API with new upgrades and case studies from Indeed, SK Telecom, and Harvey, promoting DIY fine-tuning and custom model training. Microsoft achieved a quantum computing breakthrough with an 800x error rate improvement and the most usable qubits to date. Stability AI released Stable Audio 2.0, improving audio generation quality and control. The Opera browser added local inference support for large language models like Meta's Llama, Google's Gemma, and Vicuna. Discussions on Reddit highlighted Gemini's large context window, analysis of GPT-3.5-Turbo model size, and a battle simulation between Claude 3 and ChatGPT using local 7B models like Mistral and Gemma.
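A minimal example of Claude tool use with the Python SDK (the weather tool is illustrative; at launch the beta also required an `anthropic-beta` header):

```python
import anthropic

client = anthropic.Anthropic()
response = client.messages.create(
    model="claude-3-opus-20240229",
    max_tokens=1024,
    tools=[{
        "name": "get_weather",
        "description": "Get the current weather for a city.",
        "input_schema": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    }],
    messages=[{"role": "user", "content": "What's the weather in Paris?"}],
)
# If Claude decides to call the tool, response.content holds a tool_use block
# with the tool name and arguments for the caller to execute and return.
print(response.content)
```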
ReALM: Reference Resolution As Language Modeling
flan-t5 gpt-4 apple openai hugging-face stability-ai reference-resolution finetuning quantization retrieval-augmented-generation open-source coding-agents podcast-generation image-generation ai-industry-trends takuto-takizawa
Apple is advancing in AI with ReALM (Reference Resolution As Language Modeling), which resolves ambiguous references across three context types (on-screen, conversational, and background entities) by finetuning a smaller FLAN-T5 model that outperforms GPT-4 on this task. In Reddit AI news, the open-source coding agent SWE-agent achieves 12.29% on the SWE-bench benchmark, and RAGFlow introduces a customizable retrieval-augmented generation engine. A new quantization method, QuaRot, enables efficient 4-bit inference. AI applications include a t-shirt design generator, podgenai for GPT-4-based podcast generation, and an open-source model from Hugging Face that runs without a GPU. Industry discussions focus on the impact of large language models on the AI field and efforts to decentralize AI development. Takuto Takizawa joins Stability AI Japan as Head of Sales & Partnerships.
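A hedged sketch of ReALM's core move: serialize candidate entities from the three context types into a numbered textual list so a finetuned LM can answer a reference query by index (the formatting is illustrative, not the paper's exact encoding):

```python
def build_realm_prompt(request, onscreen, conversational, background):
    entities = onscreen + conversational + background
    listing = "\n".join(f"{i}. {e}" for i, e in enumerate(entities, 1))
    return (f"Entities:\n{listing}\n\nRequest: {request}\n"
            "Which entity does the request refer to?")

print(build_realm_prompt(
    "call the bottom one",
    onscreen=["phone: 415-555-0100", "phone: 650-555-0199"],
    conversational=[],
    background=[],
))
```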
Andrew likes Agents
gpt-3.5 gpt-4 cyberrealistic_v40 platypus-xl sdxl-lightning openai stability-ai agents human-eval-benchmark fine-tuning local-llm-deployment inference-speed image-generation lora upscaling workflow-optimization andrew-ng lilian-weng emad
Andrew Ng's The Batch writeup on Agents highlighted the significant improvement in coding benchmark performance when using an iterative agent workflow, with GPT-3.5 wrapped in an agent loop achieving up to 95.1% correctness on HumanEval, surpassing GPT-4 zero-shot at 67.0%. The report also covers new developments in Stable Diffusion models like Cyberrealistic_v40, Platypus XL, and SDXL Lightning for Naruto-style image generation, alongside innovations in LoRA and upscaling techniques. Discussions on local LLM deployment and optimization focus on hardware setups and finetuning strategies for efficient inference and multi-user serving. Emad's departure from Stability AI and new Sora videos from OpenAI were also noted.
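A hedged sketch of the iterative agent loop behind those HumanEval numbers: generate code, run the tests, feed failures back, retry (`llm` and `run_tests` are illustrative stand-ins, not Ng's setup):

```python
def agent_loop(task: str, tests: str, llm, run_tests, max_iters: int = 5) -> str:
    code = llm.generate(f"Write a Python solution for: {task}")
    for _ in range(max_iters):
        ok, errors = run_tests(code, tests)
        if ok:
            break  # all tests pass
        code = llm.generate(
            f"Task: {task}\nCurrent code:\n{code}\n"
            f"Failing tests:\n{errors}\nRevise the code to fix them."
        )
    return code
```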
Shipping and Dipping: Inflection + Stability edition
inflection-ai-2.5 stable-diffusion-3 claude-3-haiku claude-3-sonnet claude-3-opus tacticai inflection-ai stability-ai microsoft nvidia google-deepmind anthropic executive-departures gpu-acceleration ai-assistants geometric-deep-learning ai-integration ai-cost-reduction ai-job-displacement ai-healthcare model-release mustafa-suleyman
Inflection AI and Stability AI recently shipped major updates (Inflection 2.5 and Stable Diffusion 3) but are now experiencing significant executive departures, signaling potential consolidation in the GPU-rich startup space. Mustafa Suleyman has joined Microsoft AI as CEO, overseeing consumer AI products like Copilot, Bing, and Edge. Microsoft Azure is collaborating with NVIDIA on the Grace Blackwell 200 Superchip. Google DeepMind announced TacticAI, an AI assistant for football tactics developed with Liverpool FC, using geometric deep learning and achieving 90% expert approval in blind tests. Anthropic released Claude 3 Haiku and Claude 3 Sonnet on Google Cloud's Vertex AI, with Claude 3 Opus coming soon. Concerns about AI job displacement arise as NVIDIA introduces AI nurses that outperform humans at bedside manner at 90% lower cost.
World_sim.exe
gpt-4 gpt-4o grok-1 llama-cpp claude-3-opus claude-3 gpt-5 nvidia nous-research stability-ai hugging-face langchain anthropic openai multimodality foundation-models hardware-optimization model-quantization float4 float6 retrieval-augmented-generation text-to-video prompt-engineering long-form-rag gpu-optimization philosophy-of-ai agi-predictions jensen-huang yann-lecun sam-altman
NVIDIA announced Project GR00T, a foundation model for humanoid robot learning using multimodal instructions, built on their tech stack including Isaac Lab, OSMO, and Jetson Thor. They revealed the DGX Grace-Blackwell GB200 with over 1 exaflop of compute, capable of training a GPT-4-scale 1.8T-parameter model in 90 days on 2,000 Blackwells. Jensen Huang confirmed GPT-4 has 1.8 trillion parameters. The new GB200 GPU supports float4/6 precision with ~3 bits per parameter and achieves 40,000 TFLOPs on fp4 with 2x sparsity.
Open source highlights include the release of Grok-1, a 314B-parameter model, and Stability AI's SV3D, an open model that generates multi-view 3D videos from a single image. Nous Research collaborated on implementing Steering Vectors in llama.cpp.
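A hedged sketch of the generic activation-steering technique (not the Nous/llama.cpp implementation): add a fixed direction to a hidden state at one layer during the forward pass to bias generation:

```python
import torch

def apply_steering(hidden: torch.Tensor, steer: torch.Tensor, alpha: float = 4.0):
    # hidden: (batch, seq, d_model); steer: (d_model,), e.g. the mean activation
    # difference between contrastive prompt pairs at the chosen layer.
    return hidden + alpha * steer  # broadcasts over batch and sequence
```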
In Retrieval Augmented Generation (RAG), a new 5.5-hour tutorial builds a pipeline using open-source HF models, and LangChain released a video on query routing and announced integration with NVIDIA NIM for GPU-optimized LLM inference.
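A hedged sketch of the retrieval step of such a pipeline with open-source HF models (in the spirit of the tutorial, not its actual code):

```python
import numpy as np
from sentence_transformers import SentenceTransformer

docs = ["GR00T is a humanoid-robot foundation model.",
        "GB200 pairs Grace CPUs with Blackwell GPUs."]

embedder = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")
doc_vecs = embedder.encode(docs, normalize_embeddings=True)

def retrieve(query: str, k: int = 1):
    q = embedder.encode([query], normalize_embeddings=True)[0]
    scores = doc_vecs @ q  # cosine similarity (vectors are normalized)
    return [docs[i] for i in np.argsort(-scores)[:k]]

context = retrieve("What is GR00T?")
prompt = f"Context: {context}\n\nQuestion: What is GR00T?"  # feed to any LLM
```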
Prominent opinions include Yann LeCun distinguishing language from other cognitive abilities, Sam Altman predicting AGI arrival in 6 years with a leap from GPT-4 to GPT-5 comparable to GPT-3 to GPT-4, and discussions on the philosophical status of LLMs like Claude. There is also advice against training models from scratch for most companies.
Stable Diffusion 3 — Rombach & Esser did it again!
stable-diffusion-3 claude-3 orca dolphincoder-starcoder2-15b stability-ai anthropic microsoft latitude perplexity-ai llamaindex tripo-ai diffusion-models multimodality benchmarking human-evaluation text-generation image-generation 3d-modeling fine-tuning roleplay coding dataset-release soumith-chintala bill-peebles swyx kevinafischer jeremyphoward akhaliq karinanguyen_ aravsrinivas
Over 2500 new community members joined following Soumith Chintala's shoutout, highlighting growing interest in SOTA LLM-based summarization. The major highlight is the detailed paper release of Stable Diffusion 3 (SD3), showcasing advanced text-in-image control and complex prompt handling, with the model outperforming other SOTA image generation models in human-evaluated benchmarks. The SD3 model is based on an enhanced Diffusion Transformer architecture called MMDiT. Meanwhile, Anthropic released Claude 3 models, noted for human-like responses and emotional depth, scoring 79.88% on HumanEval but costing over twice as much as GPT-4. Microsoft launched new Orca-based models and datasets, and Latitude released DolphinCoder-StarCoder2-15b with strong coding capabilities. Integration of image models by Perplexity AI and 3D CAD generation by PolySpectra powered by LlamaIndex were also highlighted. "SD3's win rate beats all other SOTA image gen models (except perhaps Ideogram)" and "Claude 3 models are very good at generating d3 visualizations from text descriptions."
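For reference, a hedged example of running SD3 Medium with Diffusers once the weights landed; the MMDiT backbone plus T5 encoder is what drives the text-in-image control described above:

```python
import torch
from diffusers import StableDiffusion3Pipeline

pipe = StableDiffusion3Pipeline.from_pretrained(
    "stabilityai/stable-diffusion-3-medium-diffusers", torch_dtype=torch.float16
).to("cuda")

image = pipe(
    'a chalkboard with the words "SD3" neatly written on it',
    num_inference_steps=28,
    guidance_scale=7.0,
).images[0]
image.save("sd3.png")
```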