not much happened today
kimi-k2.6 qwen-3.6-max-preview moonshot alibaba vllm openrouter cloudflare baseten mlx nous-research opencode ollama mixture-of-experts multimodality int4-quantization long-context agentic-coding multi-agent-systems model-orchestration memory-consolidation llm-driven-replanning dynamic-context-injection
Moonshot's Kimi K2.6 is a major open-weight 1T-parameter MoE model featuring 32B active parameters, 384 experts, MLA attention, a 256K context window, native multimodality, and INT4 quantization. It ships with day-0 integrations across vLLM, OpenRouter, Cloudflare Workers AI, and other platforms, and posts state-of-the-art benchmark results: 54.0 on HLE with tools, 58.6 on SWE-Bench Pro, and 93.2 on Math Vision with Python. The model excels at long-horizon execution, sustaining over 4,000 tool calls, 12+ hour continuous runs, and 300 parallel sub-agents. Meanwhile, Alibaba's Qwen3.6-Max-Preview showcases enhanced agentic coding, improved world knowledge, and stronger instruction following, with notable results on AIME 2026 (#15) and a Code Arena ranking. Hermes Agent is rapidly expanding its ecosystem, surpassing 100K GitHub stars and integrating with tools like Ollama and Copilot CLI, while pioneering advanced multi-agent orchestration techniques such as stateless ephemeral units, LLM-driven replanning, and dynamic context injection. These developments highlight the competitive momentum of Chinese open and semi-open labs in coding and agent models.
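The efficiency claim behind Kimi K2.6's design (1T total parameters but only 32B active per token) comes from sparse expert routing. A minimal sketch of top-k MoE routing, using the reported expert count of 384; the top-k value and hidden size here are toy assumptions for illustration, not Kimi's actual configuration:

```python
import numpy as np

rng = np.random.default_rng(0)

NUM_EXPERTS = 384   # from the report
TOP_K = 8           # assumption: a typical top-k for large MoE models
D_MODEL = 64        # toy hidden size for illustration

# Router: a linear layer scoring each token against every expert.
router_w = rng.standard_normal((D_MODEL, NUM_EXPERTS)) / np.sqrt(D_MODEL)

def route(token: np.ndarray) -> tuple[np.ndarray, np.ndarray]:
    """Return (expert indices, normalized gate weights) for one token."""
    logits = token @ router_w                 # (NUM_EXPERTS,) scores
    top = np.argsort(logits)[-TOP_K:]         # keep only the top-k experts
    gates = np.exp(logits[top] - logits[top].max())
    return top, gates / gates.sum()           # softmax over the chosen k

token = rng.standard_normal(D_MODEL)
experts, gates = route(token)
print(len(experts))  # only TOP_K of the 384 experts fire for this token
```

Because each token touches only the routed subset, per-token compute scales with the active parameters (32B) rather than the full 1T, which is what makes a model this size serveable at all.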
MiniMax M2.7: GLM-5-Level SOTA Open Model at 1/3 the Cost
minimax-m2.7 sonnet-4.6 glm-5 mimo-v2-pro mamba-3 qwen-3.5 kimi-k2.5 gpt-5.4-mini minimax xiaomi artificial-analysis ollama trae yupp openrouter vercel zo opencode kilocode cartesia self-evolving-agents reasoning cost-efficiency token-efficiency hybrid-architecture harness-engineering agent-harnesses skills memory-optimization architecture feedback-loops api inference execution-environment
MiniMax M2.7 is the headline model release, described as a "self-evolving agent" with strong results: 56.22% on SWE-Pro, 57.0% on Terminal Bench 2, and parity with Sonnet 4.6. It features recursive self-improvement across skills, memory, and architecture. Artificial Analysis places M2.7 on the cost/performance frontier with an Intelligence Index score of 50, matching GLM-5 (Reasoning) at a fraction of the cost; distribution is available via platforms like Ollama cloud and OpenRouter. Xiaomi's MiMo-V2-Pro is noted as a serious Chinese API-only reasoning model, scoring 49 on the Intelligence Index with favorable token efficiency. Cartesia's Mamba-3 is highlighted as an SSM optimized for inference-heavy use, with early reactions comparing it to hybrid transformer architectures like Qwen3.5 and Kimi Linear. The report emphasizes a shift from prompt engineering to harness engineering: the execution environment and agent harness, including skills and MCP, are becoming the key differentiators in AI system design. Discussion covers tools, repo legibility, constraints, and feedback loops, with DSPy and GPT-5.4 mini cited as important components of this evolving landscape.
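The "harness engineering" framing above can be made concrete: the harness, not the prompt, decides which tools exist, executes them, and feeds observations back into context. A minimal sketch of such a loop; all names here (`ToolRegistry`, `run_agent`, the stub model) are illustrative assumptions, not MiniMax's or DSPy's actual APIs:

```python
from typing import Callable

class ToolRegistry:
    """Maps tool names to callables; the harness constrains the agent
    to exactly these actions."""
    def __init__(self):
        self.tools: dict[str, Callable[[str], str]] = {}

    def register(self, name: str, fn: Callable[[str], str]) -> None:
        self.tools[name] = fn

    def call(self, name: str, arg: str) -> str:
        if name not in self.tools:
            return f"error: unknown tool {name!r}"  # feedback, not a crash
        return self.tools[name](arg)

def run_agent(model: Callable[[str], str], registry: ToolRegistry,
              task: str, max_steps: int = 5) -> str:
    """Loop: model proposes 'tool:arg' or 'done:answer'; the harness
    executes the tool and appends the observation to the context."""
    context = task
    for _ in range(max_steps):
        action = model(context)
        kind, _, payload = action.partition(":")
        if kind == "done":
            return payload
        observation = registry.call(kind, payload)
        context += f"\n[{kind}] {observation}"  # dynamic context injection
    return "max steps reached"

# Deterministic stub standing in for an LLM, for demonstration only.
def stub_model(context: str) -> str:
    if "[shell]" in context:       # second turn: tool output is in context
        return "done:tests pass"
    return "shell:pytest -q"       # first turn: ask the harness to run a tool

reg = ToolRegistry()
reg.register("shell", lambda cmd: f"ran {cmd}: 0 failed")
print(run_agent(stub_model, reg, "fix the failing test"))  # -> tests pass
```

The design choice the report highlights lives in the loop body: constraints (only registered tools run), feedback (errors return as observations rather than exceptions), and context injection are all properties of the harness, independent of which model is plugged in.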