Person: "mira-murati"
xAI Grok 3 and Mira Murati's Thinking Machines
grok-3 grok-3-mini gemini-2-pro gpt-4o o3-mini-high o1 deepseek-r1 anthropic openai thinking-machines benchmarking reasoning reinforcement-learning coding multimodality safety alignment research-publishing model-performance creative-ai mira-murati lmarena_ai karpathy omarsar0 ibab arankomatsuzaki iscienceluvr scaling01
Grok 3 has launched to mixed opinions but strong benchmark performance, notably outperforming models like Gemini 2 Pro and GPT-4o. The Grok 3 mini variant shows competitive and sometimes superior capabilities, especially in reasoning and coding, where reinforcement learning plays a key role. Mira Murati has publicly shared her post-OpenAI plans: founding the frontier lab Thinking Machines, which focuses on collaborative, personalizable AI, multimodality, and empirical safety and alignment research, an approach reminiscent of Anthropic's.
not much happened today
llama mistral openai decagon sierra togethercompute vertical-saas funding protein-structure-prediction lora self-supervised-learning model-optimization neural-architecture-search model-evaluation ethics transformers multi-agent-systems long-context mira-murati demis-hassabis clement-delangue john-o-whitaker yann-lecun francois-chollet ajeya-cotra rohan-paul adcock-brett
Vertical SaaS agents are rapidly emerging as the consensus future of AI applications, highlighted by Decagon's $100M funding and Sierra's $4B round. OpenAI alumni are actively raising venture capital and forming new startups, intensifying competition in the AI market. Demis Hassabis celebrated the Nobel Prize recognition for AlphaFold2, a breakthrough in protein structure prediction. Advances in AI models include techniques like LoRA projectors and annealing on high-quality data, while discussions emphasize the need for high-bandwidth sensory inputs beyond language to learn common sense. New methods like LoLCATs aim to make transformer models such as Llama and Mistral more efficient. Ethical concerns about AI agents performing harmful tasks remain under investigation. The AI community continues to explore model evaluation challenges and optimization frameworks like LPZero for neural architecture search.
not much happened today
llama-3-2 llama-3 gemma-2 phi-3-5-mini claude-3-haiku gpt-4o-mini molmo gemini-1.5 gemini meta-ai-fair openai allenai google-deepmind multimodality model-optimization benchmarks ai-safety model-distillation pruning adapter-layers open-source-models performance context-windows mira-murati demis-hassabis ylecun sama
Meta AI released Llama 3.2, including 1B and 3B text-only models and 11B and 90B vision variants with 128K-token context length and adapter layers for image-text integration. The text-only models outperform competitors like Gemma 2 and Phi 3.5-mini, and the family is supported on major platforms including AWS, Azure, and Google Cloud. OpenAI CTO Mira Murati announced her departure. Allen AI released Molmo, an open-source multimodal model family that outperforms some proprietary systems. Google improved Gemini 1.5 with updated Flash and Pro models. Meta also showcased Project Orion AR glasses and hinted at a Quest 3S priced at $300. Discussions covered new benchmarks for multimodal models, model optimization, and AI safety and alignment.
Llama 3.2: On-device 1B/3B, and Multimodal 11B/90B (with AI2 Molmo kicker)
llama-3-2 llama-3-1 claude-3-haiku gpt-4o-mini molmo-72b molmo-7b gemma-2 phi-3-5 llama-3-2-vision llama-3-2-3b llama-3-2-20b meta-ai-fair ai2 qualcomm mediatek arm ollama together-ai fireworks-ai weights-biases cohere weaviate multimodality vision context-windows quantization model-release tokenization model-performance model-optimization rag model-training instruction-following mira-murati daniel-han
Meta released Llama 3.2 with new multimodal versions that add 3B and 20B vision adapters to a frozen Llama 3.1, showing competitive performance against Claude Haiku and GPT-4o-mini. AI2 launched the multimodal Molmo 72B and 7B models, which outperform Llama 3.2 on vision tasks. Meta also introduced new 128K-context 1B and 3B models that compete with Gemma 2 and Phi 3.5, with on-device AI collaborations hinted with Qualcomm, MediaTek, and Arm. The 1B and 3B models were trained on 9 trillion tokens. Partner launches include Ollama, Together AI (offering free access to the 11B model), and Fireworks AI. Additionally, a new RAG++ course from Weights & Biases, Cohere, and Weaviate offers systematic evaluation and deployment guidance for retrieval-augmented generation systems, based on extensive production experience.