Model: "qwen3-max"
Qwen3.5-397B-A17B: the smallest Open-Opus-class model, and a very efficient one
qwen3.5-397b-a17b qwen3.5-plus qwen3-max qwen3-vl kimi alibaba openai deepseek z-ai minimax unsloth ollama vllm native-multimodality spatial-intelligence sparse-moe long-context model-quantization model-architecture model-deployment inference-optimization apache-2.0-license pete_steinberger justinlin610
Alibaba released Qwen3.5-397B-A17B, an open-weight model featuring native multimodality, spatial intelligence, and a hybrid linear attention + sparse MoE architecture supporting 201 languages and context windows up to 256K tokens. The model improves on predecessors like Qwen3-Max and Qwen3-VL, with a sparsity ratio of about 4.3%. Community discussions highlighted the Gated Delta Networks enabling efficient inference despite the large model size (~800GB in BF16), with successful local runs on Apple Silicon using quantization. The hosted API version, Qwen3.5-Plus, supports 1M context and integrates search and code-interpreter features. The release follows similar large-model refreshes from other Chinese labs such as Z.ai, MiniMax, and Kimi, and is expected to be the last major release before DeepSeek v4. The model is licensed under Apache-2.0. The news also notes Pete Steinberger joining OpenAI.
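A quick back-of-envelope check of the figures quoted above (a sketch using only the numbers in the summary, not an official Qwen utility): the "A17B" suffix denotes ~17B active parameters per token out of ~397B total, which gives the ~4.3% sparsity ratio, and BF16 storage at 2 bytes per parameter yields the ~800GB weight footprint.

```python
# Sanity-check the quoted sparsity ratio and BF16 footprint.
TOTAL_PARAMS = 397e9   # 397B total parameters
ACTIVE_PARAMS = 17e9   # 17B active per token (the "A17B" in the name)
BYTES_PER_PARAM_BF16 = 2  # BF16 is 2 bytes per parameter

sparsity_ratio = ACTIVE_PARAMS / TOTAL_PARAMS
bf16_size_gb = TOTAL_PARAMS * BYTES_PER_PARAM_BF16 / 1e9

print(f"sparsity ratio: {sparsity_ratio:.1%}")   # ~4.3%
print(f"BF16 weights:  ~{bf16_size_gb:.0f} GB")  # ~794 GB, i.e. the ~800GB cited
```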
not much happened today
qwen3-max qwen3-vl qwen3-coder-plus gpt-5-codex code-world-model-32b claude-sonnet-4 claude-opus-4.1 alibaba openai meta-ai-fair huggingface anthropic microsoft github context-windows code-generation model-releases model-benchmarking api model-optimization multimodality software-engineering model-training huybery akhaliq lmarena_ai gdb ylecun pierceboggan julesagent
Alibaba unveiled the Qwen3 model family, including Qwen3-Max and Qwen3-VL with a native 256K context window expandable to 1M, strong OCR in 32 languages, and rapid release velocity (~3.5 releases/month) backed by a $52B infrastructure roadmap. OpenAI launched GPT-5 Codex, an agent-optimized coding model with up to 400K context and adaptive reasoning, priced at $1.25/$10 per million input/output tokens, integrated into Cline and benchmarked in WebDev arenas. Meta AI FAIR released the open-weight Code World Model (CWM) 32B, a dense code-generation model with strong benchmark scores (e.g., 65.8% SWE-bench Verified, 96.6% Math-500) and public safety reports. Ecosystem updates include GitHub Copilot's new embedding model for faster code search and Anthropic's Claude Sonnet 4 and Opus 4.1 integration into Microsoft 365 Copilot. The vLLM 0.10.2 update introduces Decode Context Parallel (DCP) for improved system performance.
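The quoted pricing translates into per-request cost straightforwardly. A minimal sketch, assuming the "$1.25/$10 per million tokens" figure means $1.25 per million input tokens and $10 per million output tokens; the request sizes below are made up for illustration.

```python
# Estimate API cost at the quoted per-million-token rates (assumed input/output split).
INPUT_PRICE = 1.25 / 1_000_000   # dollars per input token
OUTPUT_PRICE = 10.0 / 1_000_000  # dollars per output token

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Dollar cost of one API call at the quoted rates (hypothetical helper)."""
    return input_tokens * INPUT_PRICE + output_tokens * OUTPUT_PRICE

# e.g. a 200K-token codebase prompt with a 5K-token patch in response
cost = request_cost(200_000, 5_000)
print(f"${cost:.2f}")  # $0.30
```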
Alibaba Yunqi: 7 models released in 4 days (Qwen3-Max, Qwen3-Omni, Qwen3-VL) and $52B roadmap
qwen3-max qwen3-omni qwen3-vl qwen3guard qwen3-livetranslate qwen3-tts-flash qwen-image-edit qwen3coder qwen alibaba alicloud tool-use large-model-coding reasoning multimodality model-release model-updates industry-application scaling fine-tuning reinforcement-learning junyang_lin eddie_wu alibaba_wan
Alibaba's Tongyi Qianwen (Qwen) team launched major updates including the 1T-parameter Qwen3-Max, Qwen3-Omni, and Qwen3-VL models, alongside specialized versions like Qwen3Guard, Qwen3-LiveTranslate, Qwen3-TTS-Flash, Qwen-Image-Edit, and Qwen3Coder. At the AliCloud Yunqi (Apsara) conference, CEO Eddie Wu outlined a $52B roadmap built around two AI development stages: "intelligence emergence," focused on learning from humans and reasoning, and "autonomous action," highlighting AI's tool use and real-world task execution. The updates showcase advances in tool use, large-model coding capabilities, and AI's expanding role across industries such as logistics, manufacturing, biomedicine, and finance. Junyang Lin and Alibaba Wan are key spokespersons for these developments. The Qwen project is now seen as a "frontier lab" for AI innovation.