Person: "arthur-mensch"
Pixtral Large (124B) beats Llama 3.2 90B with updated Mistral Large 24.11
pixtral-large mistral-large-24.11 llama-3-2 qwen2.5-7b-instruct-abliterated-v2-gguf qwen2.5-32b-q3_k_m vllm llama-cpp exllamav2 tabbyapi mistral-ai sambanova nvidia multimodality vision model-updates chatbots inference gpu-optimization quantization performance concurrency kv-cache arthur-mensch
Mistral released Pixtral Large, a 124B-parameter multimodal model that pairs a 1B-parameter vision encoder with the 123B-parameter Mistral Large decoder, alongside an updated Mistral Large 24.11 that brings no major new features. Pixtral Large outperforms Llama 3.2 90B on multimodal benchmarks despite using a much smaller vision component than Llama's vision adapter. Mistral's Le Chat chatbot received a comprehensive feature update, reflecting the balance between product and research that Arthur Mensch has emphasized. SambaNova, this issue's sponsor, pitches its RDUs as processing AI models faster than GPUs. On Reddit, vLLM shows strong concurrency on a single RTX 3090, though users report quality issues with its FP8 KV cache and better results from llama.cpp's Q8 KV cache; discussions also weigh the trade-offs between vLLM, exllamav2, and TabbyAPI across model sizes and batching strategies.
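As a concrete illustration of the setup those threads describe, here is a minimal vLLM sketch with an FP8-quantized KV cache, not a reproduction of any poster's configuration: the checkpoint name is illustrative (any ~7B model that fits in 24 GB works the same way) and the memory setting assumes a single RTX 3090.

```python
# Minimal sketch: offline batched inference in vLLM on a 24 GB card,
# with the KV cache quantized to FP8 to fit more concurrent sequences.
from vllm import LLM, SamplingParams

llm = LLM(
    model="Qwen/Qwen2.5-7B-Instruct",  # illustrative checkpoint, not from the thread
    kv_cache_dtype="fp8",              # FP8 KV cache: more concurrency, some quality risk
    gpu_memory_utilization=0.90,       # leave a little headroom on a 24 GB RTX 3090
)

params = SamplingParams(temperature=0.7, max_tokens=128)

# vLLM schedules these prompts with continuous batching, which is
# where the strong concurrency numbers in the thread come from.
prompts = [f"Summarize point {i} about KV-cache quantization." for i in range(32)]
for out in llm.generate(prompts, params):
    print(out.outputs[0].text[:80])
```

llama.cpp exposes the analogous knob through its --cache-type-k/--cache-type-v options (e.g. q8_0), which is the Q8 KV cache the Reddit thread found more reliable than FP8.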
Multi-modal, Multi-Aspect, Multi-Form-Factor AI
gpt-4 idefics-2-8b mistral-instruct apple-mlx gpt-5 reka-ai cohere google rewind apple mistral-ai microsoft paypal multimodality foundation-models embedding-models gpu-performance model-comparison enterprise-data open-source performance-optimization job-impact agi-criticism technical-report arthur-mensch dan-schulman chris-bishop
Between April 12 and 15, Reka AI launched Reka Core, a new GPT-4-class multimodal foundation model, with a detailed technical report described as "full Shazeer." Cohere introduced Compass, a foundation embedding model for indexing and searching multi-aspect enterprise data such as emails and invoices. The open-source IDEFICS 2-8B continues Hugging Face's reproduction of Google DeepMind's Flamingo multimodal model. Rewind pivoted to Limitless, a multi-platform app, moving away from its screen-recording "spyware" reputation. Reddit discussions highlighted Apple MLX outperforming Ollama when running Mistral Instruct on an M2 Ultra, GPU choices for LLMs and Stable Diffusion, and AI-human comparisons from Microsoft Research's Chris Bishop. Former PayPal CEO Dan Schulman predicted GPT-5 will reduce job scopes by 80%, and Mistral CEO Arthur Mensch criticized the obsession with AGI as "creating God."
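For reference, a minimal sketch of the kind of on-device run benchmarked in that MLX thread, assuming the mlx-lm package and a 4-bit community conversion of Mistral Instruct; the checkpoint name and prompt are illustrative, not the thread's actual benchmark harness.

```python
# Minimal mlx-lm sketch: load a 4-bit Mistral Instruct conversion and
# generate on Apple silicon (the M2 Ultra case discussed on Reddit).
from mlx_lm import load, generate

model, tokenizer = load("mlx-community/Mistral-7B-Instruct-v0.2-4bit")  # illustrative checkpoint
text = generate(
    model,
    tokenizer,
    prompt="Explain the difference between a vision encoder and a vision adapter.",
    max_tokens=200,
)
print(text)
```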
Grok-1 in Bio
grok-1 mixtral miqu-70b claude-3-opus claude-3 claude-3-haiku xai mistral-ai perplexity-ai groq anthropic openai mixture-of-experts model-release model-performance benchmarking finetuning compute hardware-optimization mmlu model-architecture open-source memes sam-altman arthur-mensch daniel-han arav-srinivas francis-yao
Grok-1, a 314B parameter Mixture-of-Experts (MoE) model from xAI, has been released under an Apache 2.0 license, sparking discussions on its architecture, finetuning challenges, and performance compared to models like Mixtral and Miqu 70B. Despite its size, its MMLU benchmark performance is currently unimpressive, with expectations that Grok-2 will be more competitive. The model's weights and code are publicly available, encouraging community experimentation. Sam Altman highlighted the growing importance of compute resources, while Grok's potential deployment on Groq hardware was noted as a possible game-changer. Meanwhile, Anthropic's Claude continues to attract attention for its "spiritual" interaction experience and consistent ethical framework. The release also inspired memes and humor within the AI community.
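As background on the architecture under discussion, here is a hedged PyTorch sketch of top-2 Mixture-of-Experts routing: the router sends each token to only 2 of 8 expert MLPs, so only a fraction of the 314B parameters is active per token. The 8-expert/top-2 configuration matches Grok-1's reported setup; the dimensions and layer shapes are toy values, and none of this is xAI's released code.

```python
# Toy top-2 MoE layer: route each token to 2 of 8 expert MLPs.
import torch
import torch.nn.functional as F

n_experts, top_k, d_model, d_ff = 8, 2, 64, 256  # 8 experts / top-2 per Grok-1's config; dims are toy

experts = torch.nn.ModuleList(
    torch.nn.Sequential(
        torch.nn.Linear(d_model, d_ff),
        torch.nn.GELU(),
        torch.nn.Linear(d_ff, d_model),
    )
    for _ in range(n_experts)
)
router = torch.nn.Linear(d_model, n_experts)

def moe_layer(x: torch.Tensor) -> torch.Tensor:
    """x: (n_tokens, d_model) -> (n_tokens, d_model)"""
    gate_logits = router(x)                                    # score every expert per token
    weights, chosen = torch.topk(gate_logits, top_k, dim=-1)   # keep the 2 best experts
    weights = F.softmax(weights, dim=-1)                       # renormalize over the chosen 2
    out = torch.zeros_like(x)
    for e in range(n_experts):
        token_mask = (chosen == e).any(dim=-1)                 # tokens routed to expert e
        if token_mask.any():
            w = weights[token_mask][chosen[token_mask] == e].unsqueeze(-1)
            out[token_mask] += w * experts[e](x[token_mask])   # weighted expert output
    return out

print(moe_layer(torch.randn(4, d_model)).shape)  # torch.Size([4, 64])
```

The per-expert loop is the simplest way to show the routing; production MoE kernels instead batch tokens by expert, which is exactly the scheduling problem the finetuning discussions around Grok-1 touch on.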