s{imple|table|calable} Consistency Models
llama-3-70b llama-3-405b llama-3-1 stable-diffusion-3.5 gpt-4 stability-ai tesla cerebras cohere langchain model-distillation diffusion-models continuous-time-consistency-models image-generation ai-hardware inference-speed multilingual-models yang-song
Model distillation significantly accelerates diffusion models, enabling near real-time image generation with only 1-4 sampling steps, as seen in BlinkShot and Flux Schnell. Research led by Yang Song introduced simplified continuous-time consistency models (sCMs), achieving under 10% FID difference in just 2 steps and scaling up to 1.5B parameters for higher quality. On AI hardware, Tesla is deploying a 50k H100 cluster potentially capable of completing GPT-4 training in under three weeks, while Cerebras Systems set a new inference speed record on Llama 3.1 70B with their wafer-scale AI chips. Stability AI released Stable Diffusion 3.5 and its Turbo variant, and Cohere launched new multilingual models supporting 23 languages with state-of-the-art performance. LangChain also announced ecosystem updates.
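The practical payoff of distillation is that sampling collapses from dozens of denoising passes to one or two. A minimal sketch of a few-step, consistency-style sampler, where `denoise` is a stand-in for the trained network and the noise levels are illustrative assumptions, not the actual sCM schedule:

```python
import numpy as np

def denoise(x, sigma):
    # Stand-in for a trained consistency model: maps a noisy sample at
    # noise level sigma directly to an estimate of the clean sample.
    return x / (1.0 + sigma)

def few_step_sample(shape, sigmas=(80.0, 0.8), seed=0):
    # Consistency-style sampling: start from pure noise, then alternate
    # "jump to clean estimate" and "re-noise to the next, lower level".
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(shape) * sigmas[0]
    for i, sigma in enumerate(sigmas):
        x = denoise(x, sigma)  # one network call per sampling step
        if i + 1 < len(sigmas):
            x = x + rng.standard_normal(shape) * sigmas[i + 1]
    return x

sample = few_step_sample((4, 4))
print(sample.shape)
```

With `sigmas` of length 2 this is exactly the 2-step regime the sCM results describe: each additional entry costs one more network call.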
CogVideoX: Zhipu's Open Source Sora
cogvideox llama-3-1 llama-3-405b moondream phi-3.5 llama-rank zhipu-ai alibaba meta-ai-fair google hugging-face nvidia togethercompute salesforce video-generation serverless-computing vision document-vqa text-vqa mixture-of-experts retrieval-augmented-generation long-context model-routing webgpu background-removal long-form-generation superposition-prompting rohanpaul_ai philschmid vikhyatk algo_diver jayalammar davidsholz
Zhipu AI, China's third-largest AI lab, released the open 5B video generation model CogVideoX, which can run without GPUs via their ChatGLM web and desktop apps. Meta AI announced trust and safety research and CyberSecEval 3 alongside the release of Llama 3.1, with Llama 3 405B now available serverless on Google Cloud Vertex AI and the Hugging Face x NVIDIA NIM API. Updates include Moondream, an open vision-language model improving on DocVQA and TextVQA tasks, and Phi-3.5, a lightweight mixture-of-experts chat model with 16x3.8B parameters. Together Compute introduced the Rerank API featuring Salesforce's LlamaRank model for document and code ranking. Research highlights include superposition prompting for RAG without fine-tuning, the AgentWrite pipeline for long-form generation exceeding 20,000 words, and a comparison showing long-context methods outperform RAG at higher cost. Tools include Not Diamond, an AI model router; AI command-line interfaces; and an open-source WebGPU background-removal tool. "You don't even need GPUs to run it," referring to CogVideoX.
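Model routers like Not Diamond decide per-query which model to call, trading cost against capability. Not Diamond's actual routing logic is not public; the sketch below is a hypothetical rule-based router with made-up thresholds and markers, just to show the shape of the idea:

```python
def route(query: str) -> str:
    # Hypothetical router: send long or code/math-heavy prompts to a
    # stronger model, everything else to a cheaper, faster one.
    # The markers, threshold, and model choices are all assumptions.
    hard_markers = ("prove", "debug", "refactor", "integral")
    if len(query) > 500 or any(m in query.lower() for m in hard_markers):
        return "llama-3-405b"
    return "llama-3-70b"

print(route("Debug this stack trace for me"))   # large model
print(route("What's the capital of France?"))   # small model
```

Production routers replace the hand-written rules with a learned classifier trained on preference or win-rate data, but the interface (query in, model name out) is the same.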
Apple Intelligence Beta + Segment Anything Model 2
llama-3-405b llama-3 segment-anything-model meta-ai-fair apple image-segmentation memory-attention video-processing pretraining cloud-tpus post-training synthetic-data instruction-following reasoning writing benchmarking bindureddy maximelabonne reach_vb
Meta advanced its open-source AI with a sequel to the Segment Anything Model, enhancing image segmentation with memory attention for video applications using minimal data and compute. Apple Intelligence delayed its official release to iOS 18.1 in October but launched developer previews on macOS Sequoia, iOS 18, and iPadOS 18, accompanied by a detailed 47-page paper revealing extensive pretraining on 6.3T tokens and the use of Cloud TPUs rather than Apple Silicon. The paper highlights improvements in instruction following, reasoning, and writing through post-training and synthetic data. Benchmarks show Apple's model scoring lower than Llama 3, though backed by trusted human evaluations. Additionally, Meta released Llama 3.1 with a 405B-parameter model, a significant open-source frontier model release.
Mistral Large 2 + RIP Mistral 7B, 8x7B, 8x22B
mistral-large-2 mistral-nemo-12b llama-3.1-8b llama-3.1-70b llama-3.1 llama-3-405b yi-34b-200k gpt-4o mistral-ai meta-ai-fair groq togethercompute code-generation math function-calling reasoning context-windows model-deprecation pretraining posttraining benchmarking
Mistral Large 2 introduces a 123B-parameter model with open weights under a research license, focusing on code generation, math performance, and a 128k context window, up from Mistral Large 1's 32k. It claims better function-calling capabilities than GPT-4o and enhanced reasoning. Meanwhile, Meta officially released the Llama 3.1 models, including Llama 3.1 70B and Llama 3.1 8B, with detailed pre-training and post-training insights. The Llama 3.1 8B model's 128k-context performance was found underwhelming compared to Mistral Nemo and Yi 34B 200K. Mistral is deprecating its older Apache-licensed models (7B, 8x7B, 8x22B) to focus on Large 2 and Mistral Nemo 12B. The news also highlights community discussions and benchmarking comparisons.
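Function calling, the capability Mistral Large 2 emphasizes, means the model emits a structured tool invocation instead of prose, and the client executes it. A minimal sketch of the client-side dispatch loop; the schema and tool names here are illustrative assumptions, not Mistral's actual SDK:

```python
import json

# Hypothetical tool registry; a real client would validate arguments
# against a declared JSON schema before dispatching.
TOOLS = {
    "get_weather": lambda city: f"Sunny in {city}",
}

def handle_tool_call(model_output: str) -> str:
    # A function-calling model returns JSON naming a tool and its
    # arguments; the client parses it and runs the named function.
    call = json.loads(model_output)
    fn = TOOLS[call["name"]]
    return fn(**call["arguments"])

result = handle_tool_call(
    '{"name": "get_weather", "arguments": {"city": "Paris"}}'
)
print(result)  # Sunny in Paris
```

The tool result is normally fed back to the model as a new message so it can compose a final answer; benchmarks of function calling mostly measure how reliably the model produces valid, correctly-argued JSON in the first step.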
Llama 3.1: The Synthetic Data Model
llama-3-405b llama-3-1 llama-3 meta-ai-fair groq fireworks synthetic-data fine-tuning reinforcement-learning multilinguality long-context tool-use code-generation math model-licensing inference-speed model-deployment bindureddy thomas
Meta AI has released Llama 3.1, including a 405B-parameter model large enough to trigger regulatory considerations such as the EU AI Act and SB 1047. The model relies heavily on synthetic data techniques for fine-tuning on code, math, multilinguality, long context, and tool use, with RLHF using synthetic preference data, continuing the approach from Llama 2. The launch was coordinated across major inference providers, with Groq demonstrating 750 tokens per second and Fireworks leading on pricing. The updated license explicitly allows synthetic data generation, a significant step for open frontier-class LLMs and for the cost-efficiency gains seen since March.
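Synthetic preference data of the kind described above is typically built by sampling several model generations per prompt, scoring them, and keeping the best and worst as a (chosen, rejected) pair. A minimal sketch under stated assumptions: the `reward` scorer below is a toy stand-in (it just rewards lexical diversity), not Meta's reward model:

```python
def reward(answer: str) -> float:
    # Toy stand-in scorer: fraction of distinct words. A real pipeline
    # would use a trained reward model or an LLM judge here.
    words = answer.split()
    return len(set(words)) / (len(words) or 1)

def make_preference_pair(prompt: str, candidates: list) -> dict:
    # Rank candidate generations and keep the best/worst as a
    # (chosen, rejected) pair for preference fine-tuning.
    ranked = sorted(candidates, key=reward, reverse=True)
    return {"prompt": prompt, "chosen": ranked[0], "rejected": ranked[-1]}

pair = make_preference_pair(
    "Explain RLHF.",
    ["RLHF tunes a model with a reward model.", "RLHF RLHF RLHF RLHF."],
)
print(pair["chosen"])
```

Pairs in this `{prompt, chosen, rejected}` shape are the standard input format for preference-optimization trainers.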