Topic: "rlhf"
Bespoke-Stratos + Sky-T1: The Vicuna+Alpaca moment for reasoning
sky-t1-32b-preview qwen-2.5-32b r1 o1-preview gpt-4o claude-3-sonnet bespoke-stratos-32b gemini-2.0-flash-thinking berkeley usc deepseek bespoke-labs google llmsys stanford lm-sys reasoning supervised-finetuning reinforcement-learning multimodality model-distillation context-windows code-execution model-repeatability behavioral-self-awareness rlhf teortaxestex cwolferesearch madiator chakraai philschmid abacaj omarsar0
Reasoning distillation has emerged as a key technique. Berkeley/USC researchers released Sky-T1-32B-Preview, a finetune of Qwen 2.5 32B trained on 17k reasoning traces for just $450 that matches o1-preview on benchmarks. DeepSeek introduced R1, a model surpassing o1-preview whose outputs can be distilled into much smaller models, bringing a 1.5B Qwen up to gpt-4o and claude-3-sonnet levels. Bespoke Labs further distilled R1 into Qwen (Bespoke-Stratos-32B), outperforming o1-preview with fewer samples. This progress suggests that "SFT is all you need" for reasoning, without major architecture changes. DeepSeek-R1 itself is trained with pure reinforcement learning plus a supervised-finetuning cold start to accelerate convergence, and shows strong reasoning and multimodal capabilities. Google's Gemini 2.0 Flash Thinking model boasts a 1 million token context window and code execution, and excels at math, science, and multimodal reasoning. Critiques highlight challenges in model repeatability, behavioral self-awareness, and RLHF's limits for reasoning robustness.
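At its core, this style of reasoning distillation is plain supervised finetuning on teacher-generated chain-of-thought traces. Below is a minimal sketch of that training loop; the model name, dataset fields, and hyperparameters are illustrative assumptions, not the exact Sky-T1 or Bespoke-Stratos recipes.

```python
# Minimal sketch: distill reasoning by SFT on (prompt, teacher trace) pairs.
# Assumptions: a small stand-in model and a toy in-memory dataset.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Qwen/Qwen2.5-0.5B"  # stand-in for the 32B base Sky-T1 finetunes
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
opt = torch.optim.AdamW(model.parameters(), lr=1e-5)

# Each example pairs a problem with the teacher's full reasoning trace + answer.
traces = [
    {"prompt": "What is 17 * 23?",
     "trace": "17 * 23 = 17 * 20 + 17 * 3 = 340 + 51 = 391. Answer: 391"},
]

model.train()
for ex in traces:
    text = ex["prompt"] + "\n" + ex["trace"] + tok.eos_token
    batch = tok(text, return_tensors="pt")
    # Standard causal-LM objective: next-token cross-entropy over the trace.
    loss = model(**batch, labels=batch["input_ids"]).loss
    loss.backward()
    opt.step()
    opt.zero_grad()
```

The point of the sketch is that nothing beyond ordinary SFT machinery is required; the leverage comes entirely from the quality of the teacher's traces.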
That GPT-4o Demo
gpt-4o gemma-2 meta-code-llama openai google-deepmind meta-ai-fair voice-generation ocr screen-sharing vision code-understanding model-customization efficiency textual-intelligence multimodal-agents sft distillation rlhf model-merging model-optimization safety romain-huet fchollet
Romain Huet demonstrated an unreleased version of GPT-4o on ChatGPT Desktop, showcasing low-latency voice generation, whisper-tone moderation, a camera mode streaming live video to GPT-4o, rapid OCR, screen sharing with ChatGPT for programming help, clipboard reading, and vision-based conversation about code. OpenAI highlighted four investment areas: textual intelligence, efficiency/cost, model customization, and multimodal agents. Google DeepMind released Gemma 2 models in 9B and 27B sizes, trained on 8T and 13T tokens respectively using SFT, distillation, RLHF, and model merging, optimized for TPUv5e with strong performance and safety measures. Meta AI announced the Meta LLM Compiler, built on Meta Code Llama with enhanced code optimization and compiler features.
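The distillation step in Gemma 2's training mix is the classic logit-matching kind: the student learns to reproduce a larger teacher's softened output distribution. A minimal sketch of that loss follows; the temperature and mixing weight are illustrative assumptions, not Gemma 2's published settings.

```python
# Minimal sketch of soft-label knowledge distillation (Hinton-style).
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    """Blend soft-target KL (teacher) with hard-target cross-entropy (labels)."""
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)  # rescale so gradient magnitude is temperature-independent
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard

# Toy usage: a batch of 4 positions over a 10-token vocabulary.
s = torch.randn(4, 10, requires_grad=True)
t = torch.randn(4, 10)
y = torch.randint(0, 10, (4,))
print(distillation_loss(s, t, y))
```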
AI gets Memory
miqumaid-v2-70b mixtral-8x7b-qlora mistral-7b phi-2 medalpaca aya openai langchain thebloke cohere unsloth-ai mistral-ai microsoft rag memory-modeling context-windows open-source finetuning sequential-fine-tuning direct-preference-optimization rlhf ppo javascript-python-integration hardware-optimization gpu-overclocking quantization model-training large-context multilinguality joanne-jang
AI Discords analysis covered 20 guilds, 312 channels, and 6901 messages. The report highlights the divergence of RAG-style operations for context versus memory, with implementations like MemGPT rolling out in ChatGPT and LangChain. The TheBloke Discord discussed open-source large language models such as the Large World Model, with contexts up to 1 million tokens, and Cohere's Aya model, which supports 101 languages. Roleplay-focused models like MiquMaid-v2-70B were noted for performance improvements on stronger hardware. Finetuning techniques like Supervised Fine-Tuning (SFT) and Direct Preference Optimization (DPO) were explained (see the DPO sketch below), with tools like Unsloth AI's apply_chat_template preferred over hand-rolled Alpaca prompts. Integration of JavaScript and Python via JSPyBridge in the SillyTavern project was also discussed, as were training challenges with Mixtral 8x7B QLoRA versus Mistral 7B. The LM Studio Discord focused on hardware limitations affecting large-model loading, medical LLMs like medAlpaca, and GPU upgrades and overclocking, with anticipation expressed for IQ3_XXS 1.5-bit quantization support in LM Studio.
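For reference, the DPO objective discussed above can be stated in a few lines: the policy is pushed to prefer the chosen response over the rejected one, measured relative to a frozen reference model. This is a minimal sketch assuming summed per-response log-probabilities have already been computed; the values and beta below are toy numbers.

```python
# Minimal sketch of the Direct Preference Optimization (DPO) loss.
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logp, policy_rejected_logp,
             ref_chosen_logp, ref_rejected_logp, beta=0.1):
    # Implied reward margin, measured relative to the frozen reference model;
    # beta controls how far the policy may drift from the reference.
    margin = (policy_chosen_logp - ref_chosen_logp) - (
        policy_rejected_logp - ref_rejected_logp)
    return -F.logsigmoid(beta * margin).mean()

# Toy usage: summed log-probs for a batch of 3 preference pairs.
pc = torch.tensor([-12.0, -9.5, -11.0], requires_grad=True)
pr = torch.tensor([-13.0, -10.0, -10.5], requires_grad=True)
rc = torch.tensor([-12.5, -9.8, -11.2])
rr = torch.tensor([-12.8, -9.9, -10.9])
loss = dpo_loss(pc, pr, rc, rr)
loss.backward()
print(float(loss))
```

Unlike RLHF with PPO, this needs no reward model or sampling loop, which is why it pairs naturally with a preceding SFT stage in the pipelines the Discords described.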