Company: "modal"
not much happened today
OpenAI expanded its Agents SDK by separating the agent harness from compute/storage, enabling long-running, durable agents with features like file/computer use, skills, memory, and compaction. The harness is now open source and supports execution via partner sandboxes, fostering a new ecosystem with integrations from Cloudflare, Modal, Vercel, and others. Cloudflare launched Project Think, a next-gen Agents SDK with durable execution and sandboxed code, alongside Agent Lee, a prompt-driven UI agent built on sandboxed TypeScript; it also introduced real-time voice pipelines and browser-automation tools. Hermes Agent focuses on persistent skill formation by learning from completed workflows, positioning itself as a professional agent distinct from GUI-first assistants like OpenClaw: "Hermes autonomously backfills tracking data, updates cron jobs, and saves workflows as reusable skills."
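The harness/compute separation above can be illustrated with a toy "durable agent" loop that checkpoints its state to storage after every step, so the harness can be restarted on different compute without losing progress. This is a minimal sketch of the general pattern, not any specific SDK's API; all names here are hypothetical.

```python
import json
import os
import tempfile

class DurableAgent:
    """Toy agent whose state lives in durable storage, not in the process.

    Illustrative only: the file-backed store stands in for whatever
    storage layer a real harness (OpenAI's, Cloudflare's, etc.) would use.
    """

    def __init__(self, store_path: str):
        self.store_path = store_path
        self.state = {"step": 0, "log": []}
        if os.path.exists(store_path):
            # Resume from a previous run's checkpoint.
            with open(store_path) as f:
                self.state = json.load(f)

    def run_step(self, task: str) -> None:
        self.state["step"] += 1
        self.state["log"].append(f"step {self.state['step']}: {task}")
        # Checkpoint after every step so progress survives a restart.
        with open(self.store_path, "w") as f:
            json.dump(self.state, f)

path = os.path.join(tempfile.mkdtemp(), "agent_state.json")
agent = DurableAgent(path)
agent.run_step("scan repo")
agent.run_step("draft patch")

# A fresh process ("new compute") resumes from the same storage.
resumed = DurableAgent(path)
print(resumed.state["step"])  # → 2
```

Because the only state is the checkpoint file, the compute layer is interchangeable — which is what lets partner sandboxes host execution for the open-source harness.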
Z.ai GLM-5: New SOTA Open Weights LLM
glm-5 glm-4.5 kimi-k2.5 zhipu-ai openrouter modal deepinfra ollama qoder vercel deepseek-sparse-attention long-context model-scaling pretraining benchmarking office-productivity context-window model-deployment cost-efficiency
Zhipu AI launched GLM-5, an Opus-class model scaling from 355B to 744B parameters with DeepSeek Sparse Attention integration for cost-efficient long-context serving. GLM-5 achieves SOTA on BrowseComp and leads on Vending Bench 2, focusing on office productivity tasks and surpassing Kimi K2.5 on the GDPVal-AA benchmark. Despite broad availability on platforms like OpenRouter, Modal, DeepInfra, and Ollama Cloud, GLM-5 faces compute constraints that affect rollout and pricing. The model supports up to a 200K context length and 128K max output tokens.
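Since OpenRouter exposes an OpenAI-compatible chat-completions endpoint, calling GLM-5 there is mostly a matter of building the right request payload. A minimal sketch — the `z-ai/glm-5` model slug is an assumption, and the output-token clamp reflects the 128K ceiling reported above:

```python
import json

# OpenRouter's OpenAI-compatible chat endpoint.
OPENROUTER_URL = "https://openrouter.ai/api/v1/chat/completions"

def build_glm5_request(prompt: str, max_tokens: int = 4096) -> dict:
    """Build a chat-completion payload for GLM-5 via OpenRouter.

    The model slug is assumed; GLM-5 reportedly caps output at
    128K tokens, so max_tokens is clamped defensively.
    """
    return {
        "model": "z-ai/glm-5",  # assumed slug, check the provider catalog
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": min(max_tokens, 128_000),
    }

payload = build_glm5_request("Summarize this quarterly report.", max_tokens=200_000)
print(json.dumps(payload, indent=2))
```

POSTing `payload` to `OPENROUTER_URL` with an `Authorization: Bearer <key>` header would complete the call; the same payload shape works against any of the OpenAI-compatible hosts listed above.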
not much happened today.
gpt-5.2-codex glm-4.7 openai cursor github cerebras modal artificial-analysis vllm long-running-tasks autonomous-agents code-generation inference-speed latency batch-inference gpu-scaling model-evaluation agent-systems operational-scaling swyx kevinweil pierceboggan mntruell scaling01
OpenAI launched the GPT-5.2-Codex API, touted as their strongest coding model for long-running tasks and cybersecurity. Cursor ran GPT-5.2-Codex autonomously on a browser project for a week, producing over 3 million lines of Rust code. GitHub incorporated it into their code tools, easing enterprise adoption. Discussions highlight the importance of review loops in agent systems and debate evaluation metrics for coding models. OpenAI partnered with Cerebras to improve inference speed and latency, with Cerebras serving GLM-4.7 at 1,445 tokens/sec with low latency. Provider benchmarking reveals tradeoffs in throughput, latency, and context window sizes. Modal shared operational scaling insights for self-hosted inference fleets of 20k GPUs, focusing on batch inference optimization with vLLM and the FlashInfer backend. This reflects a focus on inference infrastructure, long-horizon autonomous agents, and coding model evaluation.
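The batch-inference optimization Modal describes starts from a simple idea: group prompts into fixed-size batches so the serving engine can process them together. A minimal, framework-free sketch of that batching step (the batch size and prompt contents are illustrative):

```python
from typing import Iterable, Iterator

def batch_prompts(prompts: Iterable[str], batch_size: int) -> Iterator[list[str]]:
    """Group prompts into fixed-size batches for batch inference.

    Each yielded batch could then be handed to an engine's offline
    API — e.g. vLLM's LLM.generate(batch, sampling_params) — which
    amortizes per-request overhead across the batch.
    """
    batch: list[str] = []
    for prompt in prompts:
        batch.append(prompt)
        if len(batch) == batch_size:
            yield batch
            batch = []
    if batch:  # flush the final partial batch
        yield batch

batches = list(batch_prompts([f"prompt {i}" for i in range(10)], batch_size=4))
print([len(b) for b in batches])  # → [4, 4, 2]
```

In a real fleet, batch size is tuned against GPU memory and latency targets rather than fixed, and continuous-batching schedulers refill slots as sequences finish — but the grouping logic above is the starting point.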