Topic: "long-running-tasks"
not much happened today.
gpt-5.2-codex glm-4.7 openai cursor github cerebras modal artificial-analysis vllm long-running-tasks autonomous-agents code-generation inference-speed latency batch-inference gpu-scaling model-evaluation agent-systems operational-scaling swyx kevinweil pierceboggan mntruell scaling01
OpenAI launched the GPT-5.2-Codex API, touting it as their strongest coding model for long-running tasks and cybersecurity. Cursor integrated GPT-5.2-Codex, which ran autonomously on a browser for a week and produced over 3 million lines of Rust code, while GitHub incorporated the model into its coding tools, easing enterprise adoption. Discussions highlighted the importance of review loops in agent systems and debated evaluation metrics for coding models. OpenAI also partnered with Cerebras to improve inference speed and latency; Cerebras is serving GLM-4.7 at 1,445 tokens/sec with low latency, and provider benchmarking reveals tradeoffs among throughput, latency, and context window size. Modal shared operational insights from scaling a self-hosted inference fleet of 20k GPUs, focusing on batch inference optimization with vLLM and the FlashInfer backend. Overall, the day reflected a focus on inference infrastructure, long-horizon autonomous agents, and coding model evaluation.
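For readers unfamiliar with the batch-inference setup mentioned above, here is a minimal, illustrative vLLM sketch. It assumes a recent vLLM release where the FlashInfer attention backend can be selected via the `VLLM_ATTENTION_BACKEND` environment variable; the model name, prompt set, and parameters are placeholders, not Modal's actual configuration.

```python
import os

# Assumption: select the FlashInfer attention backend via environment variable
# (supported in recent vLLM releases); remove to use the default backend.
os.environ["VLLM_ATTENTION_BACKEND"] = "FLASHINFER"

from vllm import LLM, SamplingParams

# Illustrative model and engine settings; max_num_seqs bounds how many
# sequences the continuous-batching scheduler runs concurrently.
llm = LLM(
    model="meta-llama/Llama-3.1-8B-Instruct",
    tensor_parallel_size=1,
    max_num_seqs=256,
)

# A toy offline batch: in a real fleet these prompts would come from a queue
# or dataset shard assigned to each GPU worker.
prompts = [f"Summarize document {i}:" for i in range(1024)]
params = SamplingParams(temperature=0.0, max_tokens=256)

# vLLM batches and schedules all prompts internally (continuous batching),
# so a single generate() call saturates the GPU for offline workloads.
outputs = llm.generate(prompts, params)
for out in outputs[:3]:
    print(out.outputs[0].text[:80])
```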