Z.ai GLM-5: New SOTA Open Weights LLM
glm-5 glm-4.5 kimi-k2.5 zhipu-ai openrouter modal deepinfra ollama qoder vercel deepseek-sparse-attention long-context model-scaling pretraining benchmarking office-productivity context-window model-deployment cost-efficiency
Zhipu AI launched GLM-5, an Opus-class model scaled up from 355B to 744B parameters, integrating DeepSeek Sparse Attention for cost-efficient long-context serving. GLM-5 achieves SOTA on BrowseComp, leads on Vending Bench 2 with a focus on office-productivity tasks, and surpasses Kimi K2.5 on the GDPVal-AA benchmark. Despite broad availability on platforms like OpenRouter, Modal, DeepInfra, and Ollama Cloud, GLM-5 faces compute constraints that affect rollout and pricing. The model supports a 200K-token context window and up to 128K output tokens.
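The cost argument for sparse attention at long context can be sketched with back-of-envelope arithmetic. The top-k selection size below is illustrative (it is not a published GLM-5 configuration), assuming a DeepSeek-style scheme where each query attends to a fixed-size subset of keys:

```python
# Rough cost comparison: dense self-attention scores every past token,
# while DeepSeek-style sparse attention attends to a fixed top-k subset.
# k = 2048 is an illustrative value, not a confirmed GLM-5 setting.

def dense_attention_ops(context_len: int) -> int:
    # Each of the L query positions attends to ~L keys: O(L^2).
    return context_len * context_len

def sparse_attention_ops(context_len: int, k: int = 2048) -> int:
    # Each query attends to at most k selected keys: O(L * k).
    return context_len * min(k, context_len)

L = 200_000  # GLM-5's maximum context length
ratio = dense_attention_ops(L) / sparse_attention_ops(L)
print(f"dense/sparse attention op ratio at 200K context: {ratio:.0f}x")
```

At a 200K context the quadratic term dominates, so the per-token attention cost falls by roughly two orders of magnitude under this assumption, which is the mechanism behind the "cost-efficient long-context serving" claim.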
not much happened today.
gpt-5.2-codex glm-4.7 openai cursor github cerebras modal artificial-analysis vllm long-running-tasks autonomous-agents code-generation inference-speed latency batch-inference gpu-scaling model-evaluation agent-systems operational-scaling swyx kevinweil pierceboggan mntruell scaling01
OpenAI launched the GPT-5.2-Codex API, touted as its strongest coding model for long-running tasks and cybersecurity. Cursor integrated GPT-5.2-Codex and ran it autonomously for a week, producing over 3 million lines of Rust code. GitHub incorporated it into its code tools, easing enterprise adoption. Discussions highlight the importance of review loops in agent systems and debate evaluation metrics for coding models. OpenAI partnered with Cerebras to improve inference speed and latency, with Cerebras serving GLM-4.7 at 1,445 tokens/sec with low latency. Provider benchmarking reveals tradeoffs among throughput, latency, and context window sizes. Modal shared operational scaling insights from a self-hosted inference fleet of 20k GPUs, focusing on batch-inference optimization with vLLM and the FlashInfer backend. Together this reflects a focus on inference infrastructure, long-horizon autonomous agents, and coding-model evaluation.
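Decode throughput figures like the one above translate directly into single-stream wall-clock latency. A quick sketch, where the completion lengths are hypothetical workloads rather than numbers from the source:

```python
# Convert decode throughput (tokens/sec) into generation wall-clock time.
# 1,445 tok/s is Cerebras's reported speed serving GLM-4.7; the completion
# lengths below are hypothetical examples, not from the source.

def generation_seconds(output_tokens: int, tokens_per_sec: float) -> float:
    # Ignores prefill and network overhead; decode-time only.
    return output_tokens / tokens_per_sec

cerebras_tps = 1445.0
for n_tokens in (500, 2_000, 10_000):
    t = generation_seconds(n_tokens, cerebras_tps)
    print(f"{n_tokens:>6} tokens -> {t:5.1f} s at {cerebras_tps:.0f} tok/s")
```

Even a 10K-token completion finishes in under ten seconds at that rate, which is why per-provider throughput is weighed against latency and context-window size in the benchmarking discussions.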