Model: "miqu-70b"
Grok-1 in Bio
grok-1 mixtral miqu-70b claude-3-opus claude-3 claude-3-haiku xai mistral-ai perplexity-ai groq anthropic openai mixture-of-experts model-release model-performance benchmarking finetuning compute hardware-optimization mmlu model-architecture open-source memes sam-altman arthur-mensch daniel-han arav-srinivas francis-yao
Grok-1, a 314B-parameter Mixture-of-Experts (MoE) model from xAI, has been released under the Apache 2.0 license, sparking discussions on its architecture, finetuning challenges, and performance relative to models like Mixtral and Miqu 70B. Despite its size, its MMLU benchmark performance is currently unimpressive, with expectations that Grok-2 will be more competitive. The model's weights and code are publicly available, encouraging community experimentation. Sam Altman highlighted the growing importance of compute resources, while Grok's potential deployment on Groq hardware was noted as a possible game-changer. Meanwhile, Anthropic's Claude continues to attract attention for its "spiritual" interaction experience and consistent ethical framework. The release also inspired memes and humor within the AI community.
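For context on the MoE architecture under discussion, here is a minimal top-k routing sketch in PyTorch. Grok-1 is reported to use 8 experts with 2 active per token, but the class name, layer sizes, and toy dispatch loop below are illustrative assumptions, not xAI's actual JAX implementation (published at github.com/xai-org/grok-1).

```python
# Minimal top-k MoE routing sketch (illustrative; toy sizes, not Grok-1's config).
import torch
import torch.nn as nn
import torch.nn.functional as F

class TopKMoE(nn.Module):
    """Route each token to its k highest-scoring experts and mix their outputs."""
    def __init__(self, d_model=64, d_ff=256, n_experts=8, k=2):
        super().__init__()
        self.k = k
        self.router = nn.Linear(d_model, n_experts, bias=False)
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(), nn.Linear(d_ff, d_model))
            for _ in range(n_experts)
        )

    def forward(self, x):                        # x: (tokens, d_model)
        scores = self.router(x)                  # (tokens, n_experts)
        weights, idx = scores.topk(self.k, dim=-1)
        weights = F.softmax(weights, dim=-1)     # renormalize over the k chosen experts
        out = torch.zeros_like(x)
        for slot in range(self.k):               # dispatch tokens to their chosen experts
            for e, expert in enumerate(self.experts):
                mask = idx[:, slot] == e
                if mask.any():
                    out[mask] += weights[mask, slot].unsqueeze(-1) * expert(x[mask])
        return out

moe = TopKMoE()
print(moe(torch.randn(10, 64)).shape)  # torch.Size([10, 64])
```

Production MoE layers add load-balancing losses and per-expert capacity limits; this sketch shows only the routing idea that lets a 314B-parameter model activate a fraction of its weights per token.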
MetaVoice & RIP Bard
mixtral nous-mixtral-dpo miqu-70b gpt-4 llama-2-70b-instruct llama-2 llama-2-70b coqui metavoice google openai thebloke text-to-speech voice-cloning longform-synthesis prompt-engineering direct-preference-optimization lora-fine-tuning transformers gpu-acceleration apple-silicon content-authenticity metadata ai-censorship open-source-ai model-comparison usability model-limitations
MetaVoice, a small startup, released a new TTS model supporting voice cloning and longform synthesis, inspired by Coqui, a TTS startup that recently shut down. Google retired the Bard brand in favor of Gemini. On TheBloke Discord, discussions focused on AI training with models like Mixtral, Nous Mixtral DPO, and Miqu 70B, comparing them to OpenAI's GPT models, and debated prompt engineering, lorebooks, and removing safety features via LoRA fine-tuning on models such as Llama 2 70B Instruct. Technical topics included transformer layer offloading limitations and adapting Llama 2 for Apple Silicon. On OpenAI Discord, DALL-E images now include C2PA metadata for content authenticity, sparking debates on AI censorship, metadata manipulation, and open-source AI models versus commercial giants like GPT-4. Users also discussed GPT-4's usability, limitations, and practical applications.
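For readers unfamiliar with the LoRA workflow debated above, here is a minimal setup sketch using Hugging Face's peft library (without endorsing safety-feature removal). The model ID, rank, and target modules are illustrative assumptions, not settings taken from the Discord discussions.

```python
# Minimal LoRA setup sketch with Hugging Face PEFT; hyperparameters are
# illustrative assumptions, not values from the Discord discussions.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base = "meta-llama/Llama-2-70b-chat-hf"  # any causal LM works; 70B needs multi-GPU or quantization
model = AutoModelForCausalLM.from_pretrained(base, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(base)

config = LoraConfig(
    r=16,                                  # adapter rank: lower = fewer trainable params
    lora_alpha=32,                         # scaling applied to the adapter update
    target_modules=["q_proj", "v_proj"],   # attention projections commonly adapted
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, config)
model.print_trainable_parameters()  # typically well under 1% of the base weights
```

Because only the small adapter matrices are trained while the base weights stay frozen, this is what makes fine-tuning 70B-class models feasible on constrained hardware, which is also why it surfaces in debates about cheaply altering a model's safety behavior.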
AI2 releases OLMo - the 4th open-everything LLM
olmo-1b olmo-7b olmo-65b miqu-70b mistral-medium distilbert-base-uncased ai2 allenai mistral-ai tsmc asml zeiss fine-tuning gpu-shortage embedding-chunking json-generation model-optimization reproducible-research self-correction vram-constraints programming-languages nathan-lambert lhc1921 mrdragonfox yashkhare_ gbourdin
AI2 is gaining attention in 2024 with its new OLMo models, released in 1B and 7B sizes with a 65B model forthcoming, emphasizing open and reproducible research in the spirit of Pythia. The Miqu-70B model, widely speculated to be a leaked Mistral Medium variant, is praised for self-correction and speed optimizations. Discussions in TheBloke Discord covered programming-language preferences, VRAM constraints for large models, and fine-tuning experiments with distilbert-base-uncased. The Mistral Discord highlighted how the GPU shortage traces back to the semiconductor supply chain (TSMC fabrication, ASML lithography, Zeiss optics), debated open-source versus proprietary models, and shared fine-tuning techniques including LoRA for low-resource languages. Community insights also touched on embedding chunking strategies and improving JSON output, as sketched below.
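On the embedding-chunking point, the simplest strategy discussed in such pipelines is a sliding window: fixed-size chunks that overlap so content near a boundary is not split across embeddings. The sketch below is a hypothetical minimal version; the chunk_size and overlap defaults are assumptions, not values from the discussion.

```python
# Illustrative sliding-window chunker for embedding pipelines; the
# chunk_size/overlap defaults are hypothetical, not from the discussion.
def chunk_text(text: str, chunk_size: int = 512, overlap: int = 64) -> list[str]:
    """Split text into ~chunk_size-character chunks overlapping by `overlap`,
    so material near a boundary appears in two chunks instead of being cut."""
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    chunks, step = [], chunk_size - overlap
    for start in range(0, len(text), step):
        chunks.append(text[start:start + chunk_size])
        if start + chunk_size >= len(text):
            break
    return chunks

doc = "word " * 500
pieces = chunk_text(doc, chunk_size=200, overlap=40)
print(len(pieces), len(pieces[0]))  # each piece would then be embedded separately
```

Character-based windows are the crudest variant; token- or sentence-aware splitters follow the same pattern but measure chunk size in tokens to match an embedding model's context limit.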