Topic: "neurosymbolic-ai"
OpenAI Codex App: death of the VSCode fork, multitasking worktrees, Skills & Automations
codex openai agent-based-systems parallel-processing software-testing developer-workflows automation product-feedback-loop neurosymbolic-ai benchmarking sama reach_vb gdb skirano embirico ajambrosino thsottiaux nbaschez yuchenj_uw badlogicgames random_walker
OpenAI launched the Codex app on macOS as a dedicated agent-native command center for coding. It runs multiple agents in parallel, uses built-in git worktrees to isolate each agent's changes from the others, packages reusable bundles as skills, and supports scheduled automations. The app emphasizes developer workflows such as Plan mode for upfront task decomposition, and it is drawing positive adoption signals from insiders including @sama. There is also movement toward ecosystem standardization of skills folders, an early convention forming in agent tooling, and Codex exemplifies a "self-improving" product feedback loop that combines humans and agents. In coding-agent practice, emerging best practices include a "test-first" approach to bug fixes (write a failing test that reproduces the bug, then let the agent iterate until it passes; sketched below), a "conductor" model in which one developer manages 5-10 agents in parallel, and a neurosymbolic framing of why coding agents succeed: software is verifiable and rich in symbolic tooling. Skepticism remains about productivity studies whose benchmarks do not reflect agentic workflows.
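A minimal sketch of the "test-first" loop described above, under stated assumptions: `propose_patch` is a hypothetical stand-in for any coding agent (Codex or otherwise), not a real API, and the test runner is assumed to be pytest. The failing test plays the role of the symbolic verifier.

```python
"""Test-first bug-fix loop: the failing test is the verifier."""
import subprocess
from typing import Callable


def tests_pass(test_path: str) -> bool:
    """Run pytest on the reproducing test; exit code 0 means the bug is fixed."""
    result = subprocess.run(["pytest", test_path], capture_output=True)
    return result.returncode == 0


def fix_bug(test_path: str,
            propose_patch: Callable[[str], None],  # hypothetical agent hook
            max_attempts: int = 5) -> bool:
    """Write the failing test first, then let the agent iterate until it passes."""
    # Precondition of the workflow: the new test reproduces the bug, so it fails now.
    assert not tests_pass(test_path), "write a failing test before patching"

    for _ in range(max_attempts):
        propose_patch(test_path)   # the agent edits the codebase
        if tests_pass(test_path):  # deterministic, cheap symbolic check
            return True
    return False
```

In the "conductor" model, a developer would run several such loops concurrently, each in its own git worktree so the agents' edits cannot collide.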
AlphaProof + AlphaGeometry2 reach 1 point short of IMO Gold
gemini alphageometry-2 alphaproof llama-3-1-405b llama-3-70b llama-3-8b mistral-large-2 google-deepmind meta-ai-fair mistral-ai neurosymbolic-ai mathematical-reasoning synthetic-data knowledge-sharing model-fine-tuning alpha-zero multilinguality context-windows model-scaling benchmarking performance-comparison tim-gowers guillaume-lample osanseviero
Search+verifier systems headlined advances in neurosymbolic AI at the 2024 International Mathematical Olympiad (IMO). Google DeepMind's combination of AlphaProof and AlphaGeometry 2 solved four of the six IMO problems, finishing one point short of a gold medal. AlphaProof pairs a finetuned Gemini model with an AlphaZero-style reinforcement-learning loop over formal proofs in Lean, while AlphaGeometry 2 was trained on significantly more synthetic data and adds a novel knowledge-sharing mechanism. Despite the impressive results, human judges noted that the system needed far more time than human competitors. Meanwhile, Meta AI released Llama 3.1, including a 405B-parameter flagship alongside 70B and 8B variants, and Mistral AI launched Mistral Large 2, a 123B-parameter model with a 128k context window that outperforms Llama 3.1 on coding tasks and multilingual benchmarks. Together these releases mark significant progress in mathematical reasoning, model scaling, and multilingual capability.
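A hedged sketch of the search+verifier shape in the AlphaProof style: `propose` and `verify` are hypothetical stand-ins (in the real system a finetuned Gemini model samples candidate formal proofs and the Lean proof assistant checks them); only the loop's structure is shown here, not DeepMind's implementation.

```python
"""Search+verifier loop: sample candidates, keep only what the checker accepts."""
from typing import Callable, Optional


def search_and_verify(
    statement: str,
    propose: Callable[[str], str],       # model samples a candidate proof
    verify: Callable[[str, str], bool],  # symbolic checker (e.g. Lean) accepts/rejects
    budget: int = 1000,
) -> Optional[str]:
    """Sample candidate proofs until the budget runs out; return the shortest verified one."""
    verified = []
    for _ in range(budget):
        candidate = propose(statement)
        if verify(statement, candidate):
            # Verified proofs are correct by construction; an AlphaZero-style
            # loop would feed them back into training as new supervision.
            verified.append(candidate)
    return min(verified, key=len) if verified else None
```

The underlying asymmetry is the same one the Codex entry above points to: generation is hard, but verification is symbolic and exact, so large-scale search plus a checker can succeed where an unaided model would not.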