Topic: "performance-comparison"
AlphaProof + AlphaGeometry2 reach 1 point short of IMO Gold
gemini alphageometry-2 alphaproof llama-3-1-405b llama-3-70b llama-3-8b mistral-large-2 google-deepmind meta-ai-fair mistral-ai neurosymbolic-ai mathematical-reasoning synthetic-data knowledge-sharing model-fine-tuning alpha-zero multilinguality context-windows model-scaling benchmarking performance-comparison tim-gowers guillaume-lample osanseviero
Search+Verifier methods highlight advances in neurosymbolic AI at the 2024 International Mathematical Olympiad (IMO). Google DeepMind's combination of AlphaProof and AlphaGeometry 2 solved four of the six IMO problems; AlphaProof is a fine-tuned Gemini model trained with an AlphaZero-style approach, while AlphaGeometry 2 was trained on significantly more synthetic data and adds a novel knowledge-sharing mechanism. Despite the impressive result, the judges noted that the system needed far more time than human competitors. Meanwhile, Meta AI released Llama 3.1 with a 405B-parameter flagship and smaller variants, and Mistral AI launched Mistral Large 2, a 123B-parameter model with a 128k context window that outperforms Llama 3.1 on coding tasks and multilingual benchmarks. Together these results mark significant progress in AI mathematical reasoning, model scaling, and multilingual capabilities.
Sama says: GPT-5 soon
gpt-5 mixtral-7b gpt-3.5 gemini-pro gpt-4 llama-cpp openai codium thebloke amd hugging-face mixture-of-experts fine-tuning model-merging 8-bit-optimization gpu-acceleration performance-comparison command-line-ai vector-stores embeddings coding-capabilities sam-altman ilya-sutskever itamar andrej-karpathy
At Davos, Sam Altman said his top priority is launching the new model, likely to be called GPT-5, while expressing uncertainty about Ilya Sutskever's employment status. Itamar from Codium introduced the concept of Flow Engineering with AlphaCodium, which drew attention from Andrej Karpathy. On the TheBloke Discord, engineers discussed a multi-specialty mixture-of-experts (MoE) model combining seven distinct 7B-parameter models specialized in law, finance, and medicine. Debates on 8-bit fine-tuning and using bitsandbytes with GPU support were prominent, as were discussions of model merging with tools like Mergekit and compatibility with the Alpaca format. There was also interest in optimizing models on AMD hardware by building llama.cpp against the AOCL BLAS and LAPACK libraries. Users experimented with AI for command-line tasks, and the Mixtral MoE model was refined to surpass larger models in coding ability. Comparisons among LLMs such as GPT-3.5, Mixtral, Gemini Pro, and GPT-4 focused on knowledge depth, problem-solving, and speed, especially for coding tasks.
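For readers who want a concrete starting point for the 8-bit fine-tuning discussion, here is a minimal sketch of loading a causal LM with bitsandbytes 8-bit quantization and preparing it for LoRA training via PEFT. The checkpoint name and LoRA hyperparameters are illustrative assumptions, not the settings used in the Discord threads.

```python
# Minimal sketch: 8-bit loading with bitsandbytes + LoRA prep via PEFT.
# Checkpoint name and LoRA hyperparameters are placeholders/assumptions.
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

model_id = "mistralai/Mistral-7B-v0.1"  # placeholder; any causal LM checkpoint works

# Load weights in 8-bit via bitsandbytes (requires a CUDA-capable GPU).
bnb_config = BitsAndBytesConfig(load_in_8bit=True)
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",
)

# Prepare the quantized model for training (norm casting, gradient checkpointing).
model = prepare_model_for_kbit_training(model)

# Attach small trainable LoRA adapters instead of updating the 8-bit base weights.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=["q_proj", "v_proj"],
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
```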
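Since Mergekit itself is driven by YAML configs and a CLI, the following is only a toy, library-agnostic illustration of the arithmetic behind a linear merge: a weighted average of the state dicts of same-architecture checkpoints. It is not Mergekit's implementation, and the model names are placeholders.

```python
# Toy illustration of linear model merging (weighted average of state dicts).
# Not Mergekit's implementation; all checkpoints must share one architecture.
import torch
from transformers import AutoModelForCausalLM

def linear_merge(model_ids, weights):
    assert len(model_ids) == len(weights) and abs(sum(weights) - 1.0) < 1e-6
    merged = AutoModelForCausalLM.from_pretrained(model_ids[0], torch_dtype=torch.float32)
    merged_state = {k: v * weights[0] for k, v in merged.state_dict().items()}
    for model_id, w in zip(model_ids[1:], weights[1:]):
        other = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float32)
        for k, v in other.state_dict().items():
            merged_state[k] += w * v
        del other  # free memory before loading the next checkpoint
    merged.load_state_dict(merged_state)
    return merged

# Example: equal-weight merge of two hypothetical fine-tunes of the same base model.
# merged = linear_merge(["org/finetune-a", "org/finetune-b"], [0.5, 0.5])
# merged.save_pretrained("merged-model")
```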