All tags
Topic: "gpu-kernels"
not much happened today
gemini gemini-robotics-er-1.6 gpt-5.4-cyber deepagents-0.5 google tencent google-deepmind openai hugging-face cursor langchain agent-infrastructure cuda-optimization visual-reasoning spatial-reasoning gpu-kernels multi-agent-systems memory-management async-systems multimodality prompt-caching software-engineering robotics clementdelangue dylantfwang antoinersx steveschoettler teknium aiqiang888 sydneyrunkle
Google introduced Skills in Chrome, enabling reusable browser workflows with Gemini prompts and a library of ready-made Skills, bringing agent automation to end users. Tencent teased HYWorld 2.0, an open-source 3D world model that generates editable scenes from a single image. Google DeepMind released Gemini Robotics-ER 1.6, improving visual and spatial reasoning for robotics with 93% instrument-reading success. OpenAI expanded Trusted Access with GPT-5.4-Cyber, a model fine-tuned for defensive security workflows. Hugging Face launched Kernels on the Hub, offering GPU kernel repos with 1.7x–2.5x speedups. Cursor showcased a multi-agent CUDA optimization system with a 38% speedup across 235 problems. The Hermes Agent stack advanced to v0.9.0 with enhanced reliability, memory management, and integrations, while LangChain pushed deepagents 0.5 toward deployable, multi-tenant async systems with multimodal support and prompt caching. "Hermes' key advantage is operational stability, extensibility, and deployability."
GPT 5.4: SOTA Knowledge Work -and- Coding -and- CUA Model, OpenAI is so very back
gpt-5.4 gpt-5.4-pro openai cursor_ai perplexity_ai arena native-computer-use long-context efficiency steering benchmarking gpu-kernels attention-mechanisms algorithmic-optimization pipeline-optimization sama reach_vb scaling01 danshipper yuchenj_uw
OpenAI launched GPT-5.4 and GPT-5.4 Pro with unified mainline and Codex models, featuring native computer use, up to ~1M token context, and efficiency improvements including a new Codex /fast mode. Benchmarks showed strong results, including OSWorld-Verified 75.0% (surpassing the human baseline) and GDPval 83% against industry pros. User feedback highlighted coding utility but raised concerns about pricing and overthinking. Integration with devtools such as Cursor, Perplexity, and Arena was announced. In systems research, FlashAttention-4 (FA4) was introduced with near-matmul-speed attention on Blackwell GPUs, featuring innovations like polynomial exp emulation and online softmax. "Steering mid-response" and "fewer tokens, faster speed" were emphasized as UX and efficiency improvements.
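FA4's kernel internals aren't shown in the coverage, but the online-softmax trick it builds on is simple to illustrate: instead of two passes (find the max, then sum the exponentials), a running max and a rescaled running sum are maintained in a single streaming pass, which is what lets FlashAttention-family kernels fuse softmax into the attention loop. A minimal pure-Python sketch of the idea (not FA4's actual implementation):

```python
import math

def online_softmax(xs):
    # Streaming softmax statistics: running max m and running sum s
    # of exp(x - m), updated in one pass so no value ever overflows.
    m = float("-inf")  # running max
    s = 0.0            # running sum, always relative to current m
    for x in xs:
        m_new = max(m, x)
        # rescale the old sum to the new max, then add the new term
        s = s * math.exp(m - m_new) + math.exp(x - m_new)
        m = m_new
    # a final pass emits probabilities; the (m, s) statistics needed
    # only the single streaming pass above
    return [math.exp(x - m) / s for x in xs]

probs = online_softmax([1.0, 2.0, 3.0])
```

The rescaling step `s * exp(m - m_new)` is the key move: it retroactively corrects all previously accumulated terms when a new maximum appears, which is what makes the computation tileable across blocks of keys in an attention kernel.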