Person: "justinlin610"
not much happened today
gemini-3.1-flash-lite gemini-3 gpt-5.3 gpt-5.4 qwen google-deepmind google openai alibaba multimodality latency throughput context-window model-pricing model-benchmarking model-performance conversational-ai hallucination-reduction api model-rollout leadership-exit jeffdean noamshazeer sundarpichai aidan_mclau justinlin610
Google DeepMind launched Gemini 3.1 Flash-Lite, emphasizing dynamic thinking levels for adjustable compute, with notable metrics like $0.25/M input, $1.50/M output, 1432 Elo on LMArena, and 2.5× faster time-to-first-token than Gemini 2.5 Flash. It supports a 1M context window and high throughput for multimodal inputs including text, images, video, audio, and PDFs. OpenAI rolled out GPT-5.3 Instant to all ChatGPT users, improving conversational naturalness and reducing hallucinations by 26.8% with search. The upcoming GPT-5.4 was teased amid speculation. Alibaba's Qwen faces leadership exits, raising concerns about its future and open-source status. The news highlights advancements in model efficiency, pricing, and multimodality, alongside organizational changes impacting AI development.
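The quoted per-token rates make per-request cost easy to estimate. A minimal sketch, using the $0.25/M input and $1.50/M output figures above; the token counts in the example are made-up illustration values:

```python
# Back-of-envelope request cost at the quoted Gemini 3.1 Flash-Lite
# rates: $0.25 per 1M input tokens, $1.50 per 1M output tokens.
INPUT_PRICE_PER_M = 0.25   # USD per 1M input tokens
OUTPUT_PRICE_PER_M = 1.50  # USD per 1M output tokens

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Cost in USD for one request (illustrative arithmetic only)."""
    return (input_tokens * INPUT_PRICE_PER_M
            + output_tokens * OUTPUT_PRICE_PER_M) / 1_000_000

# Example: 200K tokens of context in, 2K tokens generated out.
print(round(request_cost(200_000, 2_000), 4))  # 0.053
```

Note that at these rates a long-context call is dominated by the input side: 200K input tokens cost $0.05 versus $0.003 for the 2K output tokens.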
Claude Code Anniversary + Launches from: Qwen 3.5, Cursor Demos, Cognition Devin 2.2, Inception Mercury 2
qwen3.5-flash qwen3.5-35b-a3b qwen3.5-122b-a10b qwen3.5-27b qwen3.5-397b-a17b gpt-5.3-codex claude-code alibaba openai anthropic cursor huggingface model-architecture reinforcement-learning quantization context-windows agentic-ai api websockets software-ux enterprise-workflows model-deployment awnihannun andrew_n_carr justinlin610 unslothai terryyuezhuo haihaoshen 0xsero ali_tongyilab scaling01 gdb noahzweben _catwu
Alibaba launched the Qwen 3.5 Medium Model Series, featuring models like Qwen3.5-Flash, Qwen3.5-35B-A3B (MoE), and Qwen3.5-122B-A10B (MoE), emphasizing efficiency over scale with innovations like 1M context and INT4 quantization. OpenAI released GPT-5.3-Codex via the Responses API with enhanced file input support and faster WebSocket-based throughput. Anthropic introduced Claude Code Remote Control, enabling terminal session continuation from mobile, and expanded enterprise workflow features. Cursor shifted its UX to agent demo videos instead of diffs, highlighting new interaction modes.
Qwen3.5-397B-A17B: the smallest Open-Opus class, very efficient model
qwen3.5-397b-a17b qwen3.5-plus qwen3-max qwen3-vl kimi alibaba openai deepseek z-ai minimax kimi unsloth ollama vllm native-multimodality spatial-intelligence sparse-moe long-context model-quantization model-architecture model-deployment inference-optimization apache-2.0-license pete_steinberger justinlin610
Alibaba released Qwen3.5-397B-A17B, an open-weight model featuring native multimodality, spatial intelligence, and a hybrid linear attention + sparse MoE architecture supporting 201 languages and long context windows up to 256K tokens. The model shows improvements over previous versions like Qwen3-Max and Qwen3-VL, with an active-parameter ratio of about 4.3% (17B active of 397B total). Community discussions highlighted the Gated Delta Networks enabling efficient inference despite the large model size (~800GB in BF16), with successful local runs on Apple Silicon using quantization techniques. The hosted API version, Qwen3.5-Plus, supports 1M context and integrates search and code interpreter features. This release follows other Chinese labs like Z.ai, Minimax, and Kimi in refreshing their large models. The model is licensed under Apache-2.0 and is expected to be the last major release before DeepSeek v4. The news also notes Pete Steinberger joining OpenAI.
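The two headline numbers in this summary are internally consistent and can be sanity-checked with simple arithmetic (BF16 stores 2 bytes per parameter):

```python
# Sanity-checking two figures from the summary:
# (1) sparsity: 17B active of 397B total parameters -> ~4.3%
# (2) checkpoint size: 397B params * 2 bytes (BF16) -> ~800 GB
total, active = 397e9, 17e9

sparsity = active / total
bf16_gb = total * 2 / 1e9  # bytes -> GB (decimal)

print(f"active ratio: {sparsity:.1%}")  # 4.3%
print(f"BF16 size: {bf16_gb:.0f} GB")   # 794 GB
```

This is why quantization matters for local runs: at INT4 (~0.5 bytes/param) the same weights shrink to roughly 200 GB, which is what makes Apple Silicon deployments plausible.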
not much happened today
claude-3-7-sonnet gpt-4-1 gemini-3 qwen3-vl-embedding qwen3-vl-reranker glm-4-7 falcon-h1r-7b jamba2 stanford google google-deepmind alibaba z-ai tii ai21-labs huggingface copyright-extraction multimodality multilinguality retrieval-augmented-generation model-architecture mixture-of-experts model-quantization reasoning inference kernel-engineering memory-optimization enterprise-ai sundarpichai justinlin610
Stanford paper reveals Claude 3.7 Sonnet memorized 95.8% of Harry Potter 1, highlighting copyright extraction risks compared to GPT-4.1. Google AI Studio sponsors TailwindCSS amid OSS funding debates. Google and Sundar Pichai launch Gmail Gemini 3 features including AI Overviews and natural-language search with user controls. Alibaba Qwen releases Qwen3-VL-Embedding and Qwen3-VL-Reranker, a multimodal, multilingual retrieval stack supporting text, images, and video with quantization and instruction customization, achieving strong benchmark results. Z.ai goes public on HKEX with GLM-4.7 leading the Artificial Analysis Intelligence Index v4.0, showing gains in reasoning, coding, and agentic use, with large-scale MoE architecture and MIT license. Falcon-H1R-7B from TII targets efficient reasoning in smaller models, scoring 16 on the Intelligence Index. AI21 Labs introduces Jamba2, a memory-efficient enterprise model with hybrid SSM-Transformer architecture and Apache 2.0 license, available via SaaS and Hugging Face. vLLM shows throughput improvements in inference and kernel engineering. "Embeddings should be multimodal by default," notes Justin Lin.
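An embedding + reranker pair like Qwen3-VL-Embedding/Qwen3-VL-Reranker is typically used as a two-stage retrieve-then-rerank pipeline: cheap vector search produces a shortlist, then a heavier model rescores each (query, document) pair. A minimal sketch of that pattern with random toy vectors and a stand-in scoring function, not the real models or their APIs:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stage 1: dense retrieval. In practice these vectors would come from
# an embedding model; here they are random unit-norm stand-ins.
doc_vecs = rng.normal(size=(1000, 64))
doc_vecs /= np.linalg.norm(doc_vecs, axis=1, keepdims=True)
query_vec = rng.normal(size=64)
query_vec /= np.linalg.norm(query_vec)

scores = doc_vecs @ query_vec          # cosine similarity (unit vectors)
top_k = np.argsort(scores)[::-1][:20]  # shortlist of 20 candidates

# Stage 2: rerank only the shortlist. A real reranker scores each
# (query, doc) pair jointly; this dot product is a toy stand-in.
def rerank_score(q, d):
    return float(q @ d)

reranked = sorted(top_k,
                  key=lambda i: rerank_score(query_vec, doc_vecs[i]),
                  reverse=True)
print(reranked[:5])
```

The design point is cost: the expensive pairwise scorer runs on 20 candidates instead of 1000 documents, which is what makes cross-scoring affordable at retrieval time.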
not much happened today
claude-3-sonnet claude-3-opus gpt-5-codex grok-4-fast qwen-3-next gemini-2.5-pro sora-2-pro ray-3 kling-2.5 veo-3 modernvbert anthropic x-ai google google-labs openai arena epoch-ai mit luma akhaliq coding-agents cybersecurity api model-taxonomy model-ranking video-generation benchmarking multi-modal-generation retrieval image-text-retrieval finbarrtimbers gauravisnotme justinlin610 billpeeb apples_jimmy akhaliq
Anthropic announces a new CTO. Among frontier coding agents, Claude Sonnet 4.5 shows strong cybersecurity performance and a polished UX but trails GPT-5 Codex in coding capability. xAI Grok Code Fast claims higher edit success at lower cost. Google's Jules coding agent launches a programmable API with CI/CD integration. Qwen clarifies its model taxonomy and API tiers. Vision/LM Arena rankings show tight competition among Claude Sonnet 4.5, Claude Opus 4.1, Gemini 2.5 Pro, and OpenAI's latest models. In video generation, Sora 2 Pro leads App Store rankings with rapid iteration and a new creator ecosystem; early tests show it answers GPQA-style questions at 55% accuracy versus GPT-5's 72%. Video Arena adds new models like Luma's Ray 3 and Kling 2.5 for benchmarking. Ovi, a multi-modal video+audio generation model (Veo-3-like), is released. Retrieval models include ModernVBERT from MIT with efficient image-text retrieval capabilities. "Claude Sonnet 4.5 is basically the same as Opus 4.1 for coding" and "Jules is a programmable team member" highlight key insights.
Qwen3-Next-80B-A3B-Base: Towards Ultimate Training & Inference Efficiency
qwen3-next qwen3 mixtral-8x7b gemini-2.5-pro alibaba mistral-ai deepseek snowflake hugging-face baseten nvidia mixture-of-experts model-sparsity gated-attention hybrid-architecture rmsnorm model-stability model-training inference-optimization multi-token-prediction model-deployment justinlin610 teortaxestex yuchenj_uw
MoE (Mixture of Experts) models have become essential in frontier AI, with Qwen3-Next pushing sparsity further by activating only 3.7% of parameters (3B out of 80B) using a hybrid architecture combining Gated DeltaNet and Gated Attention. The design uses 512 total experts, with 10 routed experts plus 1 shared expert active per token, Zero-Centered RMSNorm for stability, and improved MoE router initialization, reportedly yielding ~10× cheaper training and 10× faster inference compared to previous models. Alibaba's Qwen3-Next reportedly outperforms Gemini-2.5-Flash-Thinking and approaches the flagship 235B model's performance, with deployments on Hugging Face and Baseten and native vLLM support for efficient inference.
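The routing layout described above (many experts, few active) can be sketched in a few lines. This toy layer mirrors the reported shape only: 512 experts, top-10 routing plus one always-on shared expert, with random weights and a small hidden size standing in for the real model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy top-k MoE layer mirroring the reported Qwen3-Next layout:
# 512 experts, 10 routed per token + 1 always-active shared expert.
NUM_EXPERTS, TOP_K, HIDDEN = 512, 10, 32

router_w = rng.normal(size=(HIDDEN, NUM_EXPERTS)) * 0.02
experts = rng.normal(size=(NUM_EXPERTS, HIDDEN, HIDDEN)) * 0.02
shared_expert = rng.normal(size=(HIDDEN, HIDDEN)) * 0.02

def moe_layer(x):
    logits = x @ router_w                  # router scores, one per expert
    top = np.argsort(logits)[-TOP_K:]      # indices of the 10 routed experts
    gates = np.exp(logits[top] - logits[top].max())
    gates /= gates.sum()                   # softmax over selected experts only
    out = sum(g * (x @ experts[i]) for g, i in zip(gates, top))
    return out + x @ shared_expert         # shared expert always contributes

x = rng.normal(size=HIDDEN)
y = moe_layer(x)
print(y.shape)  # (32,)
```

Per token, only 11 of the 513 expert matrices are touched, which is the mechanism behind the "3B active out of 80B total" figure: most weights sit idle on any given forward pass, so compute scales with the active subset rather than the full parameter count.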