Topic: "parallel-processing"
OpenAI Codex App: death of the VSCode fork, multitasking worktrees, Skills Automations
codex openai agent-based-systems parallel-processing software-testing developer-workflows automation product-feedback-loop neurosymbolic-ai benchmarking sama reach_vb gdb skirano embirico ajambrosino thsottiaux nbaschez yuchenj_uw badlogicgames random_walker
OpenAI launched the Codex app on macOS as a dedicated agent-native command center for coding, featuring multiple agents running in parallel, built-in worktrees for conflict isolation, skills as reusable bundles, and scheduled automations. The app emphasizes developer workflows such as Plan mode for upfront task decomposition and is drawing positive adoption signals from insiders, including @sama. There is movement towards ecosystem standardization of skills folders, signaling early conventions in agent tooling. Codex also exemplifies a "self-improving" product feedback loop combining humans and agents. In coding-agent practice, emerging best practices include a "test-first" approach to bug fixes, the "conductor" model in which one developer manages 5-10 agents in parallel, and a neurosymbolic framing that explains why coding agents succeed: software is verifiable and comes with symbolic tooling. Benchmark skepticism remains about productivity studies that do not reflect agentic workflows.
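The worktree-per-agent pattern behind conflict isolation can be sketched with plain `git worktree`: each agent gets its own branch and working directory, so parallel edits never collide. The helper below is a hypothetical illustration (the branch and path naming scheme is an assumption, not the Codex app's internals):

```python
from pathlib import Path


def worktree_cmd(repo: Path, agent_id: str, base: str = "main") -> list[str]:
    """Build the git command that gives one agent an isolated worktree.

    Hypothetical naming scheme: branch agent/<id>, checkout under
    .worktrees/<id>. Run the returned command with subprocess.run(cmd,
    check=True); `git worktree remove` cleans up when the agent finishes.
    """
    branch = f"agent/{agent_id}"
    path = repo / ".worktrees" / agent_id
    return ["git", "-C", str(repo), "worktree", "add", "-b", branch, str(path), base]


# One such command per parallel agent keeps their uncommitted changes apart.
cmd = worktree_cmd(Path("/tmp/myrepo"), "fix-login-bug")
print(" ".join(cmd))
```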
Moonshot Kimi K2.5 - Beats Sonnet 4.5 at half the cost, SOTA Open Model, first Native Image+Video, 100 parallel Agent Swarm manager
kimi-k2.5 moonshotai multimodality model-training mixture-of-experts agentic-ai vision video-understanding model-optimization parallel-processing office-productivity
MoonshotAI's Kimi K2.5 is a 1T-parameter open-weights mixture-of-experts model with 32B active parameters, featuring native multimodality with image and video understanding, built through continual pretraining on 15 trillion mixed visual and text tokens. It introduces a new MoonViT vision encoder and supports advanced capabilities like Agent Swarm, which coordinates up to 100 sub-agents for parallel workflows, and an Office Productivity K2.5 Agent for large-scale office tasks. This release marks a significant leap in open models from China, claiming state-of-the-art results on benchmarks like HLE and BrowseComp, and offering aggressive API pricing and throughput.
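A fan-out pattern like Agent Swarm's (many sub-agents working in parallel under a concurrency cap) can be sketched with `asyncio`; this is a toy illustration of the coordination shape, not Moonshot's implementation, and `run_subagent` is a stand-in for a real model call:

```python
import asyncio

MAX_AGENTS = 100  # K2.5's Agent Swarm reportedly coordinates up to 100 sub-agents


async def run_subagent(task: str, sem: asyncio.Semaphore) -> str:
    """Stand-in for one sub-agent; the semaphore caps how many run at once."""
    async with sem:
        await asyncio.sleep(0)  # placeholder for I/O-bound agent work
        return f"done: {task}"


async def swarm(tasks: list[str]) -> list[str]:
    """Dispatch all tasks concurrently and gather their results in order."""
    sem = asyncio.Semaphore(MAX_AGENTS)
    return await asyncio.gather(*(run_subagent(t, sem) for t in tasks))


results = asyncio.run(swarm([f"task-{i}" for i in range(5)]))
```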
ChatGPT Codex, OpenAI's first cloud SWE agent
codex-1 openai-o3 codex-mini gemma-3 blip3-o qwen-2.5 marigold-iid deepseek-v3 lightlab gemini-2.0 lumina-next openai runway salesforce qwen deepseek google google-deepmind j1 software-engineering parallel-processing multimodality diffusion-models depth-estimation scaling-laws reinforcement-learning fine-tuning model-performance multi-turn-conversation reasoning audio-processing sama kevinweil omarsar0 iscienceluvr akhaliq osanseviero c_valenzuelab mervenoyann arankomatsuzaki jasonwei demishassabis philschmid swyx teortaxestex jaseweston
OpenAI launched Codex, a cloud-based software engineering agent powered by codex-1 (an optimized version of OpenAI o3), available in research preview for Pro, Enterprise, and Team ChatGPT users and featuring parallel execution of tasks such as refactoring and bug fixing. The Codex CLI was enhanced with quick sign-in and a new low-latency model, codex-mini. Gemma 3 is highlighted as the best open model runnable on a single GPU. Runway released the Gen-4 References API for style transfer in generation. Salesforce introduced BLIP3-o, a unified multimodal model family using diffusion transformers for CLIP image features. The Qwen 2.5 models (1.5B and 3B versions) were integrated into the PocketPal app with various chat templates. Marigold IID, a new state-of-the-art open-source depth estimation model, was released.
In research, DeepSeek shared insights on scaling and hardware for DeepSeek-V3. Google unveiled LightLab, diffusion-based light-source control in images. Google DeepMind's AlphaEvolve uses Gemini 2.0 to discover new math and reduce costs without reinforcement learning. Omni-R1 studied audio's role in fine-tuning audio LLMs. Qwen proposed a parallel scaling law inspired by classifier-free guidance. Salesforce released Lumina-Next on the Qwen base, outperforming Janus-Pro. A study found that LLM performance degrades in multi-turn conversations due to unreliability. J1 incentivizes LLM-as-a-Judge thinking via reinforcement learning. A new Qwen study correlates question similarity with strategy similarity to predict reasoning strategies.
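The parallel-scaling idea, run several transformed copies of the same forward pass at once and aggregate them (echoing how classifier-free guidance combines two passes), can be sketched as a toy. Everything here is illustrative: the `tanh` "model", the offset transforms, and the uniform aggregation weights are assumptions, not the paper's code:

```python
import math
import random

P = 4  # number of parallel streams
random.seed(0)


def forward(x: float) -> float:
    """Stand-in for one forward pass of the shared model."""
    return math.tanh(x)


# Toy "learnable" per-stream input transforms and aggregation weights.
transforms = [random.uniform(-0.1, 0.1) for _ in range(P)]
weights = [1.0 / P] * P


def parallel_forward(x: float) -> float:
    # The P transformed passes are independent, so they can run in parallel;
    # their outputs are combined with the aggregation weights.
    outs = [forward(x + t) for t in transforms]
    return sum(w * o for w, o in zip(weights, outs))


y = parallel_forward(0.0)
```

The claimed scaling law relates the number of streams P to loss improvements comparable to growing the base model, at much lower parameter cost.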