Company: "eleutherai"
GPT-4o: the new SOTA-EVERYTHING Frontier model (GPT4T version)
gpt-4o gpt-3.5 llama-3 openai hugging-face nous-research eleutherai hazyresearch real-time-reasoning coding-capabilities fine-tuning knowledge-distillation hardware-optimization quantization multimodality mixture-of-experts efficient-attention model-scaling depth-upscaling transformer-architecture gpu-optimization prompt-engineering
OpenAI launched GPT-4o, a frontier model supporting real-time reasoning across audio, vision, and text, now free for all ChatGPT users, with enhanced coding capabilities and advanced voice and video features to come. Discussions covered open-source LLMs like Llama 3, fine-tuning techniques including knowledge distillation for GPT-3.5, and hardware optimization strategies such as quantization. Emerging directions included multimodal integrations with ChatGPT voice and the Open Interpreter API, Mixture of Experts models combining autoregressive and diffusion approaches, and novel designs such as the YOCO architecture and the ThunderKittens DSL for efficient GPU use. Research advances in efficient attention methods like Conv-Basis using FFT and model scaling techniques such as depth upscaling were also highlighted.
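As a point of reference for the quantization discussions mentioned above, the following is a minimal, illustrative sketch of symmetric per-tensor int8 weight quantization in NumPy. The function names (quantize_int8, dequantize) and the per-tensor scheme are assumptions for illustration, not the specific recipes covered in the discussions.

```python
import numpy as np

def quantize_int8(weights: np.ndarray):
    """Map float weights to int8 with a single per-tensor scale (symmetric)."""
    scale = np.abs(weights).max() / 127.0  # largest magnitude maps to +/-127
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover approximate float weights from the int8 tensor and its scale."""
    return q.astype(np.float32) * scale

# Example: quantize a random weight matrix and check the reconstruction error.
w = np.random.randn(1024, 1024).astype(np.float32)
q, scale = quantize_int8(w)
print("max abs error:", np.abs(w - dequantize(q, scale)).max())
```

Real deployments typically refine this with per-channel or group-wise scales and calibration data, but the memory-versus-accuracy trade-off is the same lever.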
RWKV "Eagle" v5: Your move, Mamba
rwkv-v5 mistral-7b miqu-1-70b mistral-medium llama-2 mistral-instruct-v0.2 mistral-tuna llama-2-13b kunoichi-dpo-v2-7b gpt-4 eleutherai mistral-ai hugging-face llamaindex nous-research rwkv lmsys fine-tuning multilinguality rotary-position-embedding model-optimization model-performance quantization speed-optimization prompt-engineering model-benchmarking reinforcement-learning andrej-karpathy
RWKV v5 Eagle was released with better-than-Mistral-7B evaluation results, trading some English performance for multilingual capability. The mysterious miqu-1-70b model sparked debate about its origins, possibly a leak or distillation of Mistral Medium, or a fine-tuned Llama 2. Discussions highlighted fine-tuning techniques, including the effectiveness of 1,000 high-quality prompts over larger mixed-quality datasets, and tools like DeepSpeed, Axolotl, and QLoRA. The Nous Research AI community emphasized the impact of Rotary Position Embedding (RoPE) theta settings on LLM context extrapolation, improving models like Mistral Instruct v0.2. Speed improvements in the Mistral Tuna kernels cut token processing costs. The launch of Eagle 7B, with 7.52B parameters, showcased strong multilingual performance surpassing other 7B-class models.
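To make the RoPE theta point concrete, here is a hedged sketch of how the theta base sets per-dimension rotation frequencies: a larger base slows the low-frequency rotations, which is the lever used for long-context extrapolation. The helper names and theta values below are illustrative, not the exact settings used for Mistral Instruct v0.2.

```python
import numpy as np

def rope_frequencies(head_dim: int, theta: float) -> np.ndarray:
    """Inverse frequencies for each rotary dimension pair."""
    return 1.0 / (theta ** (np.arange(0, head_dim, 2) / head_dim))

def rotate(x: np.ndarray, position: int, theta: float) -> np.ndarray:
    """Apply RoPE to a single head vector at a given token position."""
    freqs = rope_frequencies(x.shape[-1], theta)
    angles = position * freqs
    cos, sin = np.cos(angles), np.sin(angles)
    x1, x2 = x[0::2], x[1::2]
    out = np.empty_like(x)
    out[0::2] = x1 * cos - x2 * sin
    out[1::2] = x1 * sin + x2 * cos
    return out

# Raising the theta base stretches the slowest frequencies, so distant
# positions remain distinguishable instead of aliasing at long context.
for theta in (10_000.0, 1_000_000.0):
    print(theta, rope_frequencies(8, theta))

x = np.random.randn(8).astype(np.float32)
print(rotate(x, position=4096, theta=1_000_000.0))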