not much happened today
claude-3-sonnet claude-3-opus gpt-5-codex grok-4-fast qwen-3-next gemini-2.5-pro sora-2-pro ray-3 kling-2.5 veo-3 modernvbert anthropic x-ai google google-labs openai arena epoch-ai mit luma akhaliq coding-agents cybersecurity api model-taxonomy model-ranking video-generation benchmarking multi-modal-generation retrieval image-text-retrieval finbarrtimbers gauravisnotme justinlin610 billpeeb apples_jimmy akhaliq
Anthropic announces a new CTO. Among frontier coding agents, Claude Sonnet 4.5 shows strong cybersecurity performance and a polished UX but trails GPT-5 Codex in coding capability, while xAI's Grok Code Fast claims a higher edit success rate at lower cost. Google's Jules coding agent launches a programmable API with CI/CD integration, and Qwen clarifies its model taxonomy and API tiers. LMArena vision rankings show a tight race among Claude Sonnet 4.5, Claude Opus 4.1, Gemini 2.5 Pro, and OpenAI's latest models. In video generation, Sora 2 Pro leads App Store rankings with rapid iteration and a new creator ecosystem; in early tests it answers GPQA-style questions at 55% accuracy versus GPT-5's 72%. Video Arena adds new models for benchmarking, including Luma's Ray 3 and Kling 2.5, and Ovi, a Veo-3-style multimodal video+audio generation model, is released. On the retrieval side, ModernVBERT from MIT offers efficient image-text retrieval. Two quotes capture the key insights: "Claude Sonnet 4.5 is basically the same as Opus 4.1 for coding" and "Jules is a programmable team member".
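For readers unfamiliar with how image-text retrieval models like ModernVBERT are used downstream, here is a minimal sketch of the scoring step: rank candidate images for a text query by cosine similarity of their embeddings. The embedding vectors below are made-up stand-ins for illustration; a real system would obtain them from the model itself, whose actual API is not shown here.

```python
import math

def cosine(a, b):
    # Cosine similarity between two equal-length embedding vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def rank_images(text_emb, image_embs):
    """Return image ids sorted from most to least similar to the text embedding."""
    scored = [(cosine(text_emb, emb), img_id) for img_id, emb in image_embs.items()]
    return [img_id for _, img_id in sorted(scored, reverse=True)]

# Toy example: a "cat" query embedding is closest to cat.jpg's embedding.
query = [0.9, 0.1, 0.0]
images = {
    "cat.jpg": [0.8, 0.2, 0.1],
    "dog.jpg": [0.1, 0.9, 0.2],
    "car.jpg": [0.0, 0.1, 0.95],
}
print(rank_images(query, images))  # → ['cat.jpg', 'dog.jpg', 'car.jpg']
```

The same nearest-neighbor ranking applies whether the embeddings come from a small model like ModernVBERT or a larger one; efficiency claims for such models are about how cheaply these vectors are produced, not about this scoring step.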
Grok 4 Fast: xAI's distilled, 40% more token-efficient, 2M-context, 344 tok/s frontier model
grok-4-fast magistral-1.2 moondream-3 granite-docling-258m sail-vl2 xai meta-ai-fair mistral-ai ibm bytedance efficiency reasoning vision multimodality model-optimization model-deployment vision-encoders model-architecture model-training nearcyan aidangomez _akhaliq vikhyatk rohanpaul_ai
xAI announced Grok 4 Fast, a highly efficient model running at 344 tokens/second, offering reasoning and non-reasoning modes and free trials on major platforms. Meta showcased its neural band and Ray-Ban Display in a live demo that hit hiccups but sparked discussion about live hardware demos and integration challenges. Meta is also developing a first-party "Horizon Engine" for AI rendering and released Quest-native Gaussian Splatting capture tech. New model releases include Mistral's Magistral 1.2, a compact multimodal vision-language model with improved benchmarks and local deployment; Moondream 3, a 9B-parameter MoE VLM focused on efficient visual reasoning; IBM's Granite-Docling-258M, a document VLM for layout-faithful PDF to HTML/Markdown conversion; and ByteDance's SAIL-VL2, a vision-language foundation model excelling at multimodal understanding and reasoning at 2B and 8B parameter scales.
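To make the headline numbers concrete, here is a back-of-envelope sketch combining the two claims about Grok 4 Fast: 344 tokens/second decode speed and roughly 40% fewer tokens spent per task. The baseline token count is an illustrative assumption, not a published figure.

```python
def seconds_to_generate(num_tokens, tok_per_sec=344.0):
    """Wall-clock seconds to decode num_tokens at a fixed rate."""
    return num_tokens / tok_per_sec

baseline_tokens = 10_000                          # hypothetical tokens a baseline model spends
efficient_tokens = baseline_tokens * (1 - 0.40)   # 40% more token-efficient → 6,000 tokens

print(round(seconds_to_generate(efficient_tokens), 1))  # → 17.4
```

Under these assumptions, a task that would cost a baseline model 10,000 output tokens takes about 17.4 seconds of decoding; token efficiency compounds with raw speed, since fewer tokens at a faster rate shortens latency on both axes.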