Z.ai GLM-5: New SOTA Open Weights LLM
glm-5 glm-4.5 kimi-k2.5 zhipu-ai openrouter modal deepinfra ollama qoder vercel deepseek-sparse-attention long-context model-scaling pretraining benchmarking office-productivity context-window model-deployment cost-efficiency
Zhipu AI launched GLM-5, an Opus-class model that scales from GLM-4.5's 355B parameters to 744B and integrates DeepSeek Sparse Attention for cost-efficient long-context serving. GLM-5 achieves SOTA on BrowseComp, leads Vending Bench 2, and surpasses Kimi K2.5 on the GDPVal-AA benchmark, with a particular focus on office productivity tasks. Despite broad availability on platforms like OpenRouter, Modal, DeepInfra, and Ollama Cloud, compute constraints are weighing on rollout and pricing. The model supports up to a 200K-token context window and 128K max output tokens.
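The serving limits above (200K context, 128K max output) can be sketched as a request builder for an OpenAI-compatible chat-completions endpoint such as OpenRouter's. This is a hypothetical illustration: the model slug `z-ai/glm-5` and the cap constant are assumptions drawn from the article, not confirmed identifiers.

```python
# Hypothetical sketch: build a GLM-5 payload for an OpenAI-compatible
# /chat/completions endpoint. Slug and limits are assumptions from the
# article, not confirmed provider values.
MODEL_SLUG = "z-ai/glm-5"       # assumed OpenRouter-style slug
MAX_OUTPUT_TOKENS = 128_000     # 128K max output tokens per the article

def build_glm5_request(prompt: str, max_tokens: int = 4096) -> dict:
    """Return a JSON-serializable request body, enforcing the output cap."""
    if max_tokens > MAX_OUTPUT_TOKENS:
        raise ValueError(
            f"max_tokens={max_tokens} exceeds the {MAX_OUTPUT_TOKENS} output cap"
        )
    return {
        "model": MODEL_SLUG,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }

# To actually send it (needs an API key), POST the payload as JSON to
# https://openrouter.ai/api/v1/chat/completions with a Bearer auth header.
```

Enforcing the output cap client-side surfaces a clear error before the request is billed or rejected server-side.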
not much happened today
gpt-5.2 claude-opus-4.5 kimi-k2.5 openai anthropic deeplearningai langchain apple agentic-ai multimodality coding self-verification agent-engineering model-benchmarking model-optimization workflow-automation
AI News for 1/27/2026-1/28/2026 covers a quiet day with deep dives into a frontier-model "personality split": GPT-5.2 excels at exploration and Claude Opus 4.5 at exploitation, suggesting OpenAI suits research workflows and Anthropic commercial reliability. Agentic coding loops are surfacing new failure modes, and self-verification workflows are gaining traction in response. The open-weights Kimi K2.5 emerges as a flashpoint: it boasts enhanced agent execution, multimodality, and coding polish, runs on Apple silicon M3 Ultra Mac Studios linked over Thunderbolt 5 (RDMA), and challenges Claude Opus 4.5 on both benchmarks and pricing, though licensing issues threaten enterprise adoption despite the model's quality. The "clawdbot" meme reflects rapid agent-branding proliferation. Agent engineering advances with shared "skills" interfaces promoted by DeepLearning.AI, Anthropic, and LangChain.
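The self-verification workflow mentioned above can be sketched as a generic generate-check-repair loop. The `generate` and `verify` callables here are placeholders (e.g. a model call and a test runner), not any vendor's actual API.

```python
# Minimal sketch of a self-verifying agentic coding loop, assuming the
# caller supplies a code generator and a verifier (tests/linter).
from typing import Callable, Optional

def self_verify_loop(
    generate: Callable[[str], str],          # prompt -> candidate code
    verify: Callable[[str], Optional[str]],  # candidate -> error report, or None if it passes
    task: str,
    max_rounds: int = 3,
) -> Optional[str]:
    """Generate a candidate, verify it, and feed failures back as
    context until it passes or the round budget is exhausted."""
    prompt = task
    for _ in range(max_rounds):
        candidate = generate(prompt)
        error = verify(candidate)
        if error is None:
            return candidate  # verified output
        # Fold the failure report into the next request.
        prompt = f"{task}\n\nPrevious attempt failed:\n{error}\nFix it."
    return None  # gave up: surface to a human reviewer
```

The loop addresses the failure mode in the summary: instead of trusting a single generation, each candidate must pass an external check before it is accepted.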
Moonshot Kimi K2.5 - Beats Sonnet 4.5 at half the cost, SOTA Open Model, first Native Image+Video, 100 parallel Agent Swarm manager
kimi-k2.5 moonshotai multimodality model-training mixture-of-experts agentic-ai vision video-understanding model-optimization parallel-processing office-productivity
MoonshotAI's Kimi K2.5 is a 1T-parameter open-weights mixture-of-experts model with 32B active parameters, featuring native multimodality with image and video understanding, built through continual pretraining on 15 trillion mixed visual and text tokens. It introduces the new MoonViT vision encoder and adds capabilities such as Agent Swarm, which coordinates up to 100 sub-agents for parallel workflows, and an Office Productivity K2.5 Agent for large-scale office tasks. The release marks a significant leap for open models from China, claiming state-of-the-art results on benchmarks like HLE and BrowseComp alongside aggressive API pricing and throughput.
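The Agent Swarm pattern described above can be sketched as a simple fan-out manager. Only the cap of 100 parallel sub-agents comes from the article; the function names and the thread-pool mechanism are a hypothetical illustration, not Moonshot's actual implementation.

```python
# Hypothetical fan-out sketch of an Agent Swarm manager: dispatch
# subtasks to parallel sub-agents, capped at 100 concurrent workers.
from concurrent.futures import ThreadPoolExecutor

MAX_SUBAGENTS = 100  # swarm cap reported for Kimi K2.5

def run_swarm(subtasks, run_subagent):
    """Run subtasks through parallel sub-agents and return results
    in the original subtask order."""
    if not subtasks:
        return []
    workers = min(len(subtasks), MAX_SUBAGENTS)
    with ThreadPoolExecutor(max_workers=workers) as pool:
        # pool.map preserves input order regardless of completion order.
        return list(pool.map(run_subagent, subtasks))
```

A thread pool fits here because sub-agent work is I/O-bound (API calls), so the cap mainly limits concurrent requests rather than CPU use.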