DeepSeek v4
deepseek-v4 deepseek-v4-pro deepseek-v4-flash kimi-k2.6 glm-5.1 xiaomi-mimo-v2.5-pro gpt-5.5 gpt-5.5-pro deepseek nvidia openai lambdaapi togethercompute xiaomi long-context mixture-of-experts model-quantization memory-optimization hardware-model-co-design inference-speed agent-integration token-efficiency model-deployment open-weights reasoning hallucination-detection scaling01 ben_burtenshaw artificialanlys
The DeepSeek-V4 technical release features a 1.6T-parameter MoE with 49B active parameters and a 1M-token context window, showcasing hybrid attention and compressed KV-cache schemes for major memory reductions. It ranks as the #2 open-weights reasoning model behind Kimi K2.6, but comes with a high hallucination rate and comparatively high serving costs. The release emphasizes hardware-model co-design: NVIDIA Blackwell Ultra delivers 150+ TPS/user, and support for FP4 and FP8 quantization enables deployment on a single node. Among open Chinese models, it is positioned competitively against GLM-5.1 and Xiaomi MiMo V2.5 Pro.

Meanwhile, OpenAI launched GPT-5.5 and GPT-5.5 Pro APIs with a 1M-token context window, focusing on improved long-running workflows and token efficiency; both were quickly integrated into tools like GitHub Copilot and Cursor. The pitch: "GPT-5.5 handles complex, tool-heavy, ambiguous workflows with fewer retries," highlighting rapid distribution and agent integration.
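The single-node deployment claim can be sanity-checked with back-of-envelope arithmetic. Only the 1.6T parameter count and the FP4/FP8 formats come from the release; the node configuration (8 GPUs with 288 GB of HBM each, a Blackwell-Ultra-class assumption) and the omission of KV-cache and activation memory are simplifications for illustration.

```python
# Rough weight-memory estimate for a 1.6T-param MoE under low-bit quantization.
# Hypothetical node config; excludes KV cache, activations, and framework overhead.
TOTAL_PARAMS = 1.6e12                    # 1.6T total MoE parameters (from the release)
BYTES_PER_PARAM = {"fp8": 1.0, "fp4": 0.5}

NODE_HBM_GB = 8 * 288                    # assumed: 8 GPUs x 288 GB HBM per node

for fmt, nbytes in BYTES_PER_PARAM.items():
    weights_gb = TOTAL_PARAMS * nbytes / 1e9
    fits = weights_gb <= NODE_HBM_GB
    print(f"{fmt}: {weights_gb:.0f} GB of weights, fits on one node: {fits}")
```

Under these assumptions the weights come to roughly 1600 GB at FP8 and 800 GB at FP4, both below the assumed 2304 GB of aggregate HBM, which is consistent with the single-node claim even before counting the KV-cache savings from the compressed schemes.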