not much happened today
qwen-3.5-0.8b qwen-3.5-2b qwen-3.5-4b qwen-3.5-9b codex-5.3 claude-3 alibaba ollama lm-studio openai anthropic multimodality reinforcement-learning long-context hybrid-attention on-device-ai model-deployment agent-reliability agent-observability coding-agents benchmarking runtime-optimization token-efficiency nrehiew_ kimmonismus lioronai danielhanchen theo htihle teortaxestex theprimeagen yuchenj_uw _lewtun saen_dev _philschmid omarsar0
Alibaba released the Qwen 3.5 series, with models ranging from 0.8B to 9B parameters, featuring native multimodality and scaled reinforcement learning, and targeting edge and lightweight agent deployments. The models support very long context windows of up to 262K tokens (extendable to 1M) and use a novel Gated DeltaNet hybrid attention architecture that interleaves linear and full attention layers. Deployment examples include Ollama and LM Studio, with a notable 6-bit on-device demo on an iPhone 17 Pro. Evaluators are cautioned that reasoning is disabled by default on the smaller models. In coding agents, Codex 5.3 posts promising benchmark results on WeirdML with 79.3% accuracy, though availability remains a critical challenge, as highlighted by recent Claude outages. Agent reliability and observability are emphasized as cross-functional problems requiring clear success criteria and practical evaluation strategies. Experiments show that AGENTS.md and SKILL.md guardrails can significantly reduce runtime and token usage by mitigating worst-case thrashing in coding workflows.
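The idea behind a hybrid attention stack like the one described above is that most layers use linear (O(n)) attention while every few layers a full softmax (O(n²)) layer restores global mixing. A minimal NumPy sketch of that interleaving pattern follows; the layer ratio, feature map, and weights here are illustrative assumptions, not Qwen 3.5's actual architecture or Gated DeltaNet's gating.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def full_attention(q, k, v):
    """Standard softmax attention: O(n^2) in sequence length n."""
    scores = q @ k.T / np.sqrt(q.shape[-1])
    return softmax(scores) @ v

def linear_attention(q, k, v):
    """Kernelized linear attention: O(n) via phi(q) @ (phi(k)^T v)."""
    phi = lambda x: np.maximum(x, 0.0) + 1e-6  # simple positive feature map
    qf, kf = phi(q), phi(k)
    kv = kf.T @ v                    # (d, d) summary, independent of n
    z = qf @ kf.sum(axis=0)          # per-token normalizer
    return (qf @ kv) / z[:, None]

def hybrid_stack(x, n_layers=8, full_every=4, seed=0):
    """Interleave linear and full attention (illustrative 3:1 ratio)."""
    rng = np.random.default_rng(seed)
    d = x.shape[-1]
    for i in range(n_layers):
        wq, wk, wv = (rng.standard_normal((d, d)) / np.sqrt(d) for _ in range(3))
        attn = full_attention if (i + 1) % full_every == 0 else linear_attention
        x = x + attn(x @ wq, x @ wk, x @ wv)  # residual connection
    return x
```

Because the linear layers never materialize an n-by-n score matrix, per-token cost stays flat as the context grows, which is what makes 262K-token windows tractable on lightweight hardware.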
Claude Sonnet 4.6: clean upgrade of 4.5, mostly better with some caveats
claude-3-sonnet-4.6 claude-3-sonnet-4.5 claude-3-opus-4.5 claude-3-opus-4.6 anthropic cursor microsoft perplexity-ai cognition long-context agent-planning knowledge-work benchmarking tokenization model-integration code-execution model-updates aesthetic-quality alexalbert__ scaling01 rishdotblog claudeai kimmonismus artificialanlys
Anthropic launched Claude Sonnet 4.6, an upgrade over Sonnet 4.5, featuring broad improvements in coding, long-context reasoning, agent planning, knowledge work, and design, plus a 1M-token context window (in beta). Benchmarks show Sonnet 4.6 leading GDPval-AA with an ELO of 1633, though at significantly higher token usage, and with improved output aesthetics. Integrations include Cursor, Windsurf, Microsoft Foundry, and Perplexity Pro/Max. Early user feedback noted some regression issues that were later fixed. Pricing remains the same as Sonnet 4.5. Tooling enhancements include code execution for filtering tool results, improving accuracy and efficiency.
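The "code execution for filtering results" enhancement mentioned above follows a general pattern: instead of pasting a large raw tool result into the model's context, the agent runs a small snippet that distills it first. A hedged sketch of that pattern is below; the `search_logs` and `filter_for_context` names are hypothetical illustrations, not Anthropic's actual API.

```python
def search_logs():
    """Hypothetical tool call returning a large raw result."""
    rows = [{"level": "INFO", "msg": f"heartbeat {i}"} for i in range(1000)]
    rows.append({"level": "ERROR", "msg": "disk full"})
    return rows

def filter_for_context(rows, level="ERROR", limit=20):
    """Code-side filter: only the distilled slice reaches the context window."""
    return [r["msg"] for r in rows if r["level"] == level][:limit]

raw = search_logs()
distilled = filter_for_context(raw)
# 1001 raw rows shrink to a single relevant line before prompting the model.
```

Filtering in code rather than in the prompt both cuts token spend and keeps irrelevant rows from distracting the model, which is the accuracy-plus-efficiency gain the summary refers to.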