Company: "mixtral"
Nemotron-4-340B: NVIDIA's new large open models, built on syndata, great for syndata
nemotron-4-340b mixtral llama-3 gemini-1.5 gpt-4o mamba-2-hybrid-8b samba-3.8b-instruct dolphin-2.9.3 faro-yi-9b-dpo nvidia hugging-face mistral-ai llamaindex cohere gemini mistral synthetic-data model-alignment reward-models fine-tuning long-context model-scaling inference-speed mixture-of-agents open-source-models model-training instruction-following context-windows philipp-schmid bryan-catanzaro oleksii-kuchaiev rohanpaul_ai cognitivecompai _philschmid 01ai_yi
NVIDIA has scaled up its Nemotron-4 model from 15B to a massive 340B dense model, trained on 9T tokens, achieving performance comparable to GPT-4. The model alignment process uses over 98% synthetic data, with only about 20K human-annotated samples for fine-tuning and reward model training. The synthetic data generation pipeline is open-sourced, including synthetic prompts and preference data generation. The base and instruct versions outperform Mixtral and Llama 3, while the reward model ranks better than Gemini 1.5, Cohere, and GPT-4o. Other notable models include Mamba-2-Hybrid 8B, which is up to 8x faster than Transformers and excels on long-context tasks, Samba-3.8B-instruct for infinite context length with linear complexity, Dolphin-2.9.3 tiny models optimized for low-resource devices, and Faro Yi 9B DPO with a 200K context window running efficiently on 16GB VRAM. The Mixture-of-Agents technique boosts open-source LLMs beyond GPT-4 Omni on AlpacaEval 2.0.
Talaria: Apple's new MLOps Superweapon
gemma mixtral phi dbrx apple google mistral-ai microsoft mosaic quantization on-device-ai adapter-models model-optimization model-latency lossless-quantization low-bit-palletization token-generation model-benchmarking human-evaluation craig-federighi andrej-karpathy
Apple Intelligence introduces a small (~3B parameters) on-device model and a larger server model running on Apple Silicon with Private Cloud Compute, aiming to surpass Google Gemma, Mistral Mixtral, Microsoft Phi, and Mosaic DBRX. The on-device model features a lossless quantization strategy built on mixed 2-bit and 4-bit low-bit palletization averaging 3.5 bits-per-weight, paired with LoRA adapters that can be dynamically hot-swapped for efficient memory management. Apple credits the Talaria tool for optimizing quantization and model latency, reporting roughly 0.6 ms per prompt token time-to-first-token latency and a 30 tokens-per-second generation rate on iPhone 15 Pro. Apple is pursuing an "adapter for everything" strategy, with initial deployment on SiriKit and App Intents. Performance benchmarks rely on human graders, emphasizing consumer-level adequacy over academic dominance. The Apple ML blog also mentions a code-focused model for Xcode and a diffusion model for Genmoji.
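A rough sense of how the mixed-precision figure works out: if a fraction f of weight groups is stored at 2 bits and the rest at 4 bits, the average is 2f + 4(1 - f) bits-per-weight, so the reported 3.5 bpw corresponds to roughly a quarter of groups at 2 bits. A minimal sketch of that arithmetic (the 25/75 split is inferred from the average, not a published Apple breakdown):

```python
# Illustrative arithmetic for mixed 2-bit / 4-bit palletization.
# The 2-bit fraction below is inferred from the reported 3.5 bits-per-weight
# average, not from any published Apple breakdown.

def average_bits_per_weight(frac_2bit: float) -> float:
    """Average storage cost when frac_2bit of weights use 2 bits and the rest use 4."""
    return 2.0 * frac_2bit + 4.0 * (1.0 - frac_2bit)

def frac_2bit_for_target(target_bpw: float) -> float:
    """Solve 2f + 4(1 - f) = target_bpw for f."""
    return (4.0 - target_bpw) / 2.0

if __name__ == "__main__":
    f = frac_2bit_for_target(3.5)
    print(f"2-bit fraction for 3.5 bpw: {f:.2f}")          # 0.25
    print(f"check: {average_bits_per_weight(f):.2f} bpw")  # 3.50
    # For a ~3B-parameter model, that is roughly 3e9 * 3.5 / 8 bytes of weights.
    print(f"approx weight footprint: {3e9 * 3.5 / 8 / 1e9:.2f} GB")
```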
Not much happened today
gemini-1.5-flash gemini-pro mixtral mamba-2 phi-3-medium phi-3-small gpt-3.5-turbo-0613 llama-3-8b llama-2-70b mistral-finetune twelve-labs livekit groq openai nea nvidia lmsys mistral-ai model-performance prompt-engineering data-curation ai-safety model-benchmarking model-optimization training sequence-models state-space-models daniel-kokotajlo rohanpaul_ai _arohan_ tri_dao _albertgu _philschmid sarahcat21 hamelhusain jachiam0 willdepue teknium1
Twelve Labs raised $50m in Series A funding co-led by NEA and NVIDIA's NVentures to advance multimodal AI. Livekit secured $22m in funding. Groq announced it is running at 800k tokens/second. OpenAI saw a resignation from Daniel Kokotajlo. Twitter users highlighted the Gemini 1.5 Flash model for high performance at low cost and Gemini Pro ranking #2 in Japanese language tasks. Mixtral models can run up to 8x faster on NVIDIA RTX GPUs using TensorRT-LLM. The Mamba-2 model architecture introduces state space duality for larger states and faster training, outperforming previous models. Phi-3 Medium (14B) and Small (7B) models benchmark near GPT-3.5-Turbo-0613 and Llama 3 8B. Prompt engineering is emphasized for unlocking LLM capabilities. Data quality is critical for model performance, with upcoming masterclasses on data curation. Discussions on AI safety include a frontier AI lab employee letter advocating whistleblower protections and debates on aligning AI to user intent versus broader humanity interests.
Jamba: Mixture of Architectures dethrones Mixtral
jamba dbrx mixtral animatediff fastsd sdxs512-0.9 b-lora supir ai21-labs databricks together-ai hugging-face midjourney mixture-of-experts model-architecture context-windows model-optimization fine-tuning image-generation video-generation cpu-optimization style-content-separation high-resolution-upscaling
AI21 Labs released Jamba, a 52B parameter MoE model with a 256K context length and open weights under the Apache 2.0 license, optimized to run on a single A100 GPU. It features a blocks-and-layers architecture that interleaves Transformer, Mamba (state-space), and mixture-of-experts layers, competing with models like Mixtral. Meanwhile, Databricks introduced DBRX, a 36B-active-parameter MoE model trained on 12T tokens, noted as a new standard for open LLMs. In image generation, advancements include AnimateDiff for video generation and FastSD CPU v1.0.0 beta 28 enabling ultra-fast image generation on CPUs. Other innovations involve style-content separation using B-LoRA and improvements in high-resolution image upscaling with SUPIR.
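To make the "blocks-and-layers" idea concrete, here is a hedged sketch of what such a hybrid layer schedule can look like. The 1:7 attention-to-Mamba ratio and MoE-on-every-other-layer placement follow AI21's public description of Jamba, but the helper and the position of the attention layer are illustrative, not AI21's actual code:

```python
# Illustrative layer schedule for a Jamba-style hybrid block (not AI21's code).
# Assumptions: 8 layers per block, 1 attention layer per 7 Mamba layers, and an
# MoE feed-forward on every other layer, per AI21's public description.

from dataclasses import dataclass

@dataclass
class LayerSpec:
    mixer: str  # "attention" or "mamba"
    ffn: str    # "moe" or "dense"

def jamba_style_block(num_layers: int = 8,
                      attn_every: int = 8,
                      moe_every: int = 2) -> list[LayerSpec]:
    """Build one hybrid block: mostly Mamba layers, occasional attention, MoE every other FFN."""
    layers = []
    for i in range(num_layers):
        mixer = "attention" if (i + 1) % attn_every == 0 else "mamba"
        ffn = "moe" if (i + 1) % moe_every == 0 else "dense"
        layers.append(LayerSpec(mixer, ffn))
    return layers

if __name__ == "__main__":
    for idx, spec in enumerate(jamba_style_block()):
        print(f"layer {idx}: {spec.mixer:9s} + {spec.ffn}-FFN")
```

The appeal of this layout is that the few attention layers provide precise token-to-token lookups while the Mamba layers keep per-token compute and KV-cache memory low at long context lengths.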
DBRX: Best open model (just not most efficient)
dbrx grok mixtral llama-2 mpt-7b gpt-4 databricks hugging-face mistral-ai mosaicml openai mixture-of-experts model-efficiency tokenization model-training code-generation model-architecture open-source-models benchmarking fine-tuning
Databricks Mosaic has released a new open-source model called DBRX that outperforms Grok, Mixtral, and Llama 2 on evaluations while being about 2x more efficient than Llama 2 and Grok. The model was trained on 12 trillion tokens using 3,000 H100 GPUs over 2 months, with an estimated compute cost of $10 million. It uses OpenAI's 100k-vocabulary tiktoken tokenizer and shows strong zero-shot code generation performance, even beating GPT-4 on the HumanEval benchmark. Databricks also upstreamed its MoE training work to the open-source MegaBlocks library. Despite its scale and efficiency, DBRX's performance on MMLU is only slightly better than Mixtral's, raising questions about its scaling efficiency. The focus of DBRX is on enabling users to train models efficiently, with MoE training being about 2x more FLOP-efficient than dense-model training, achieving similar quality with nearly 4x less compute than the previous MPT models. This release is part of the ongoing competition for open-source AI leadership, including models like Dolly, MPT, and Mistral. "If it activates 36B params, the model's perf should be equivalent to a 72B dense model or even 80B," says Qwen's tech lead.
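The efficiency claim follows from the usual rule of thumb that forward-pass FLOPs per token scale with roughly twice the active parameter count, so an MoE that routes each token through 36B of its 132B parameters costs about as much per token as a 36B dense model. A back-of-the-envelope sketch under that rule of thumb (illustrative, not Databricks' own accounting):

```python
# Back-of-the-envelope FLOPs-per-token comparison between a dense model and an
# MoE with fewer active parameters. Uses the common ~2 * params approximation
# for forward-pass FLOPs; not Databricks' own accounting.

def forward_flops_per_token(active_params: float) -> float:
    return 2.0 * active_params

dense_llama2_70b = forward_flops_per_token(70e9)
dbrx_moe_36b_active = forward_flops_per_token(36e9)  # 132B total, ~36B routed per token

print(f"dense 70B      : {dense_llama2_70b:.2e} FLOPs/token")
print(f"MoE, 36B active: {dbrx_moe_36b_active:.2e} FLOPs/token")
print(f"rough speedup at equal utilisation: ~{dense_llama2_70b / dbrx_moe_36b_active:.1f}x")
```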
Grok-1 in Bio
grok-1 mixtral miqu-70b claude-3-opus claude-3 claude-3-haiku xai mistral-ai perplexity-ai groq anthropic openai mixture-of-experts model-release model-performance benchmarking finetuning compute hardware-optimization mmlu model-architecture open-source memes sam-altman arthur-mensch daniel-han arav-srinivas francis-yao
Grok-1, a 314B parameter Mixture-of-Experts (MoE) model from xAI, has been released under an Apache 2.0 license, sparking discussions on its architecture, finetuning challenges, and performance compared to models like Mixtral and Miqu 70B. Despite its size, its MMLU benchmark performance is currently unimpressive, with expectations that Grok-2 will be more competitive. The model's weights and code are publicly available, encouraging community experimentation. Sam Altman highlighted the growing importance of compute resources, while Grok's potential deployment on Groq hardware was noted as a possible game-changer. Meanwhile, Anthropic's Claude continues to attract attention for its "spiritual" interaction experience and consistent ethical framework. The release also inspired memes and humor within the AI community.
Welcome Interconnects and OpenRouter
mistral-large miqu mixtral gpt-4 mistral-7b mistral-ai openai perplexity-ai llamaindex qwen langchain model-comparison model-optimization quantization role-playing story-writing code-clarity ai-assisted-decompilation asynchronous-processing quantum-computing encoder-based-diffusion open-source hardware-experimentation rag-systems nathan-lambert alex-atallah
An analysis of 22 guilds, 349 channels, and 12885 messages across Discord communities revealed active discussions on model comparisons and optimizations involving Mistral AI, Miqu, and GGUF-quantized models. Highlights include comparing Mistral Large with GPT-4, focusing on cost-effectiveness and performance, and exploring quantization techniques like GPTQ and QLoRA to reduce VRAM usage. Advanced applications such as role-playing, story-writing, code clarity, and AI-assisted decompilation were emphasized, alongside development of tools like an asynchronous summarization script for Mistral 7B. The intersection of quantum computing and AI was discussed, including DARPA-funded projects and encoder-based diffusion techniques for image processing. Community efforts featured new Spanish LLM announcements, hardware experimentation, and open-source initiatives, with platforms like Perplexity AI and LlamaIndex noted for innovation and integration. Speculation about Mistral AI's open-source commitment and tools like R2R for rapid RAG deployment highlighted the collaborative spirit.
MetaVoice & RIP Bard
mixtral nous-mixtral-dpo miqu-70b gpt-4 llama-2-70b-instruct llama-2 llama-2-70b coqui metavoice google openai thebloke text-to-speech voice-cloning longform-synthesis prompt-engineering direct-preference-optimization lora-fine-tuning transformers gpu-acceleration apple-silicon content-authenticity metadata ai-censorship open-source-ai model-comparison usability model-limitations
Following the shutdown of TTS startup Coqui, a small startup called MetaVoice released a new TTS model supporting voice cloning and longform synthesis. Google discontinued the Bard brand in favor of Gemini. On the TheBloke Discord, discussions focused on AI training with models like Mixtral, Nous Mixtral DPO, and Miqu 70B, comparing them to OpenAI's GPT models, and debated prompt engineering, lorebooks, and removing safety features via LoRA fine-tuning on models such as Llama 2 70B Instruct. Technical topics included transformer layer-offloading limitations and adapting Llama 2 for Apple Silicon. On the OpenAI Discord, DALL-E images now include C2PA metadata for content authenticity, sparking debates on AI censorship, metadata manipulation, and open-source AI models versus commercial giants like GPT-4. Users discussed GPT-4 usability, limitations, and practical applications.
Miqu confirmed to be an early Mistral-medium checkpoint
miqu-1-70b mistral-medium llama-2-70b-chat mixtral sqlcoder-70b codellama-70b bagelmistery-tour-v2 psyfighter-v2 mistral-ai hugging-face nous-research aiatmeta instruction-following sampling-methods fp16-quantization fine-tuning model-training context-length text-to-sql model-performance model-optimization intrstllrninja
Miqu, an open-access model, scores 74 on MMLU and 84.5 on EQ-Bench, sparking debates about its performance compared to Mistral Medium; Mistral's CEO confirmed that Miqu is a leaked early Mistral Medium checkpoint. Discussions in the TheBloke Discord highlight Miqu's strength in instruction-following and sampling methods like dynatemp and min-p. Developers also explore browser preferences and Discord UI themes. Role-playing with models like BagelMistery Tour v2 and Psyfighter v2 is popular, alongside technical talks on fp16 quantization of Miqu-1-70b. Training and fine-tuning tips, such as using Unsloth with models like Mistral 7B, are shared. In the Nous Research AI Discord, the Activation Beacon method is discussed for extending LLM context length from 4K to 400K tokens. SQLCoder-70B, fine-tuned on CodeLlama-70B, leads in text-to-SQL generation and is available on Hugging Face. A separately cited 83.5 EQ-Bench score for Miqu further fuels speculation about its capabilities.
CodeLlama 70B beats GPT-4 on HumanEval
codellama miqu mistral-medium llama-2-70b aphrodite-engine mixtral flatdolphinmaid noromaid rpcal chatml mistral-7b activation-beacon eagle-7b rwkv-v5 openhermes2.5 nous-hermes-2-mixtral-8x7b-dpo imp-v1-3b bakllava moondream qwen-vl meta-ai-fair ollama nous-research mistral-ai hugging-face ai-ethics alignment gpu-optimization direct-prompt-optimization fine-tuning cuda-programming optimizer-technology quantization multimodality context-length dense-retrieval retrieval-augmented-generation multilinguality model-performance open-source code-generation classification vision
Meta AI surprised the community with the release of CodeLlama 70B, an open-source model now available on platforms like Ollama and MLX for local use. The Miqu model sparked debate over its origins, possibly linked to Mistral Medium or a fine-tuned Llama-2-70b, alongside discussions on AI ethics and alignment risks. The Aphrodite engine showed strong performance on A6000 GPUs with specific configurations. Role-playing AI models such as Mixtral and Flatdolphinmaid faced challenges with repetitiveness, while Noromaid and Rpcal performed better, with ChatML and DPO recommended for improved responses. Learning resources like fast.ai's course were highlighted for ML/DL beginners, and fine-tuning techniques with optimizers like paged 8-bit Lion and Adafactor were discussed.
At Nous Research AI, the Activation Beacon project introduced a method for unlimited context length in LLMs using "global state" tokens, potentially transforming retrieval-augmented models. The Eagle-7B model, based on RWKV-v5, outperformed Mistral in benchmarks with efficiency and multilingual capabilities. OpenHermes2.5 was recommended for consumer hardware due to its quantization methods. Multimodal and domain-specific models like IMP v1-3b, Bakllava, Moondream, and Qwen-vl were explored for classification and vision-language tasks. The community emphasized centralizing AI resources for collaborative research.
1/16/2024: ArtificialAnalysis - a new model/host benchmark site
mixtral hermes-2-mixtral openchat-7b byte-mistral nous-research nvidia hugging-face summarization fine-tuning byte-level-tokenization multimodality inference-speed-optimization dataset-sharing quantization swyx gabriel_syme manojbh carsonpoole fullstack6209
Artificial Analysis launched a new models and hosts comparison site, highlighted by swyx. The Nous Research AI Discord discussed innovative summarization techniques using NVIDIA 3090 and 2080ti GPUs for processing around 100k tokens, and adapting prompts for smaller models like OpenChat 7B. The availability of Hermes 2 Mixtral on Hugging Face's HuggingChat was noted, alongside fine-tuning challenges with Mixtral using Axolotl. Discussions included byte-level tokenization experiments with Byte Mistral, multimodal training on COCO image bytes, and inference speed improvements using vLLM and llama.cpp. Calls for transparency in data sharing and open-sourcing the Hermes 2 Mixtral dataset were emphasized, with comparisons of DPO and SFT methods and quantized LLM use on an M1 MacBook Pro.
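For the byte-level tokenization experiments mentioned above, the core idea is to treat raw UTF-8 bytes as the vocabulary (256 symbols plus any special tokens), trading a tiny vocabulary for longer sequences. A minimal sketch of the idea (illustrative only; the BOS/EOS IDs are hypothetical and this is not the Byte Mistral code itself):

```python
# Minimal byte-level "tokenizer": UTF-8 bytes are the vocabulary (IDs 0-255),
# with special tokens appended after the byte range. Illustrative only; not the
# actual Byte Mistral implementation discussed on the Discord.

BOS, EOS = 256, 257  # hypothetical special-token IDs

def encode(text: str) -> list[int]:
    return [BOS] + list(text.encode("utf-8")) + [EOS]

def decode(ids: list[int]) -> str:
    return bytes(i for i in ids if i < 256).decode("utf-8", errors="replace")

if __name__ == "__main__":
    ids = encode("Hermes 2 Mixtral ✓")
    print(ids[:12], "...")  # byte IDs; non-ASCII characters expand to several bytes
    print(decode(ids))      # round-trips back to the original text
```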
1/11/2024: Mixing Experts vs Merging Models
gpt-4-turbo gpt-4-0613 mixtral deepseekmoe phixtral deepseek-ai hugging-face nous-research teenage-engineering discord mixture-of-experts model-merging fine-tuning rag security discord-tos model-performance prompt-engineering function-calling semantic-analysis data-frameworks ash_prabaker shacrw teknium 0xevil everyoneisgross ldj pramod8481 mgreg_42266 georgejrjrjr kenakafrosty
18 guilds, 277 channels, and 1342 messages were analyzed, with an estimated reading time saved of 187 minutes. The community switched to GPT-4 Turbo and discussed the rise of Mixture-of-Experts (MoE) models like Mixtral, DeepSeekMoE, and Phixtral. Model merging techniques, including naive linear interpolation and "frankenmerges" such as SOLAR and Goliath, are driving new performance gains on open leaderboards. Discussions in the Nous Research AI Discord covered topics such as AI playgrounds supporting prompt and RAG parameters, security concerns about third-party cloud usage, debates on Discord bots and the Discord TOS, skepticism about Teenage Engineering's cloud LLM, and performance differences between GPT-4 0613 and GPT-4 Turbo. The community also explored fine-tuning strategies involving DPO, LoRA, and safetensors, integration of RAG with API calls, semantic differences between MoE and dense LLMs, and data frameworks like LlamaIndex and SciPhi-AI's synthesizer. Issues with anomalous characters in fine-tuning were also raised.
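The "naive linear interpolation" merge mentioned above is simply a weighted average of two checkpoints with identical architectures, parameter by parameter. A minimal PyTorch sketch under that assumption (frankenmerges, by contrast, splice whole layers from different models and are not shown here):

```python
# Naive linear-interpolation merge of two same-architecture checkpoints.
# Minimal sketch: real merge tooling adds per-tensor weighting, dtype handling,
# and the layer-splicing "frankenmerge" modes not shown here.

import torch

def lerp_state_dicts(sd_a: dict, sd_b: dict, alpha: float = 0.5) -> dict:
    """Return alpha * A + (1 - alpha) * B for every shared parameter tensor."""
    assert sd_a.keys() == sd_b.keys(), "models must share an architecture"
    return {k: alpha * sd_a[k] + (1.0 - alpha) * sd_b[k] for k in sd_a}

if __name__ == "__main__":
    # Toy demonstration with two tiny random "models" of identical shape.
    model_a = torch.nn.Linear(4, 4)
    model_b = torch.nn.Linear(4, 4)
    merged = lerp_state_dicts(model_a.state_dict(), model_b.state_dict(), alpha=0.5)
    model_c = torch.nn.Linear(4, 4)
    model_c.load_state_dict(merged)
    print(model_c(torch.randn(1, 4)))
```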
1/9/2024: Nous Research lands $5m for Open Source AI
qlora phi-3 mixtral ollama nous-research openai rabbit-tech context-window fine-tuning synthetic-data activation-beacon transformer-architecture seed-financing real-time-voice-agents trillion-parameter-models kenakafrosty _stilic_ teknium
Nous Research announced a $5.2 million seed financing focused on Nous-Forge, aiming to embed transformer architecture into chips for powerful servers supporting real-time voice agents and trillion parameter models. Rabbit R1 launched a demo at CES with mixed reactions. OpenAI shipped the GPT store and briefly leaked an upcoming personalization feature. A new paper on Activation Beacon proposes a solution to extend LLMs' context window significantly, with code to be released on GitHub. Discussions also covered QLORA, fine-tuning, synthetic data, and custom architectures for LLMs.
1/8/2024: The Four Wars of the AI Stack
mixtral mistral nous-research openai mistral-ai hugging-face context-window distributed-models long-context hierarchical-embeddings agentic-rag fine-tuning synthetic-data oil-and-gas embedding-datasets mixture-of-experts model-comparison
The Nous Research AI Discord discussions highlighted several key topics including the use of DINO, CLIP, and CNNs in the Obsidian Project. A research paper on distributed models like DistAttention and DistKV-LLM was shared to address cloud-based LLM service challenges. Another paper titled 'Self-Extend LLM Context Window Without Tuning' argued that existing LLMs can handle long contexts inherently. The community also discussed AI models like Mixtral, favored for its 32k context window, and compared it with Mistral and Marcoroni. Other topics included hierarchical embeddings, agentic retrieval-augmented generation (RAG), synthetic data for fine-tuning, and the application of LLMs in the oil & gas industry. The launch of the AgentSearch-V1 dataset with one billion embedding vectors was also announced. The discussions covered mixture-of-experts (MoE) implementations and the performance of smaller models.
12/31/2023: Happy New Year
mistral-7b mixtral lm-studio mistral-ai hugging-face amd fine-tuning hardware-optimization vram emotional-intelligence model-deployment integration gpu-optimization software-updates
LM Studio community discussions highlight variations and optimizations in Dolphin and Mistral 7B models, focusing on hardware-software configurations and the impact of GPU VRAM on processing speed. Challenges with Mixtral model deployment on local machines and workarounds for downloading models from Hugging Face in restricted regions were addressed. Users explored enhancing AI's emotional intelligence and personalities through extended prompts, referencing research on emotional stimuli in large language models. The community also discussed hardware setups for budget AI compute servers, integration issues with ChromaDB and Autogen, and shared positive feedback on LM Studio's usability and UI. Celebrations for the New Year added a social touch to the guild interactions.
12/30/2023: Mega List of all LLMs
deita-v1.0 mixtral amazon-titan-text-express amazon-titan-text-lite nous-research hugging-face amazon mistral-ai local-attention computational-complexity benchmarking model-merging graded-modal-types function-calling data-contamination training-methods stella-biderman euclaise joey00072
Stella Biderman's tracking list of LLMs is highlighted, with resources shared for browsing. The Nous Research AI Discord discussed the Local Attention Flax module focusing on computational complexity, debating linear vs quadratic complexity and proposing chunking as a solution. Benchmark logs for various LLMs including Deita v1.0 with its SFT+DPO training method were shared. Discussions covered model merging, graded modal types, function calling in AI models, and data contamination issues in Mixtral. Community insights were sought on Amazon Titan Text Express and Amazon Titan Text Lite LLMs, including a unique training strategy involving bad datasets. Several GitHub repositories and projects like DRUGS, MathPile, CL-FoMo, and SplaTAM were referenced for performance and data quality evaluations.
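The linear-versus-quadratic point in the Local Attention discussion comes down to how many query-key pairs each scheme has to score: full attention touches all O(n²) pairs, while chunked local attention with window w touches roughly n·w. A small illustrative counter under an assumed causal, non-overlapping-chunk scheme (the actual Flax module may window differently):

```python
# Rough pair counts for full vs. chunked local attention. Assumes causal,
# non-overlapping chunks of size w; the Local Attention Flax module discussed
# on the Discord may use a different windowing scheme.

def full_attention_pairs(n: int) -> int:
    # every causal query attends to all earlier positions (including itself)
    return n * (n + 1) // 2

def chunked_local_pairs(n: int, w: int) -> int:
    # each query attends only within its own chunk of size w
    full_chunks, rem = divmod(n, w)
    return full_chunks * (w * (w + 1) // 2) + rem * (rem + 1) // 2

for n in (1_024, 8_192, 65_536):
    full = full_attention_pairs(n)
    local = chunked_local_pairs(n, w=256)
    print(f"n={n:6d}: full={full:.2e} pairs, local(w=256)={local:.2e} pairs, ratio={full / local:.0f}x")
```

The ratio grows linearly with sequence length, which is why chunking was proposed as the fix for the quadratic cost.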
12/28/2023: Smol Talk updates
tinyllama-1.1b mixtral tinygpt-v nous-research tyrannosaurus latex benchmarking knowledge-graphs model-finetuning tokenization decentralized-computation philosophy-of-ai multimodality vision open-source-models gary-marcus
Nous Research AI Discord discussions covered topics such as AI placement charts, ChatGPT's issues generating LaTeX math in a format compatible with Obsidian, and performance metrics of the TinyLlama 1.1B model on various benchmarks. Users shared resources including the math-centric corpus MathPile, knowledge graph building methods, and open-source large language model repositories. Technical discussions included decentralized computation feasibility for models like Mixtral, philosophical debates on AI sentience, and strategies for model finetuning and token counting. The community also discussed the Obsidian model, vision model training, and the release of the multimodal TinyGPT-V model by Tyrannosaurus. "ChatGPT not generating LaTeX math format compatible with Obsidian" and "optimistic about human-level AI within our lifetime" were notable quotes.
12/25/2023: Nous Hermes 2 Yi 34B for Christmas
nous-hermes-2 yi-34b nucleusx yayi-2 ferret nous-research apple mixtral deepseek qwen huggingface wenge-technology quantization model-optimization throughput-metrics batch-processing parallel-decoding tensor-parallelization multimodality language-model-pretraining model-benchmarking teknium carsonpoole casper_ai pradeep1148 osanseviero metaldragon01
Teknium released Nous Hermes 2 on Yi 34B, positioning it as a top open model compared to Mixtral, DeepSeek, and Qwen. Apple introduced Ferret, a new open-source multimodal LLM. Discussions in the Nous Research AI Discord focused on AI model optimization and quantization techniques like AWQ, GPTQ, and AutoAWQ, with insights on proprietary optimization and throughput metrics. Additional highlights include the addition of the NucleusX model (a 30B model scoring 80 on MMLU) to transformers, and the YAYI 2 language model by Wenge Technology trained on 2.65 trillion tokens. "AutoAWQ outperforms vLLM up to batch size 8" was noted, and proprietary parallel decoding and tensor parallelization across GPUs were discussed for speed improvements.
12/21/2023: The State of AI (according to LangChain)
mixtral gpt-4 chatgpt bard dall-e langchain openai perplexity-ai microsoft poe model-consistency model-behavior response-quality chatgpt-usage-limitations error-handling user-experience model-comparison hallucination-detection prompt-engineering creative-ai
LangChain launched its first State of AI report, based on LangSmith usage stats, revealing which models and tools lead in mindshare. On OpenAI's Discord, users raised issues about the Mixtral model, noting inconsistencies and comparing it to Poe's Mixtral. There were reports of declining output quality and unpredictable behavior in GPT-4 and ChatGPT, with discussions on differences between Playground GPT-4 and ChatGPT GPT-4. Users also reported anomalous behavior in the Bing and Bard AI models, including hallucinations and strange assertions. Various user concerns included message limits on GPT-4, response completion errors, chat lags, voice setting inaccessibility, password reset failures, 2FA issues, and subscription restrictions. Techniques for guiding GPT-4 outputs and creative uses of DALL-E were also discussed. Users highlighted financial constraints affecting subscriptions and raised queries about earning with ChatGPT and token costs.
12/15/2023: Mixtral-Instruct beats Gemini Pro (and matches GPT3.5)
mixtral gemini-pro gpt-3.5 gpt-4.5 gpt-4 chatgpt lmsys openai deepseek cloudflare huggingface performance context-window prompt-engineering privacy local-gpu cloud-gpu code-generation model-comparison model-usage api-errors karpathy
Thanks to a Karpathy shoutout, LMSYS now has enough data to rank Mixtral and Gemini Pro. The discussion highlights the impressive performance of state-of-the-art open-source models like Mixtral that can run on laptops. In the OpenAI Discord, users compared AI tools like Perplexity and ChatGPT's browsing tool, favoring Perplexity for its superior data gathering, pricing, and usage limits. Interest was shown in AI's ability to convert large code files, with DeepSeek Coder recommended. Debates on privacy implications for AI advancement and the challenges of running LLMs on local and cloud GPUs were prominent. Users reported issues with ChatGPT including performance problems, loss of access to custom GPTs, and unauthorized access. Discussions also covered prompt engineering for large context windows and speculation about future GPT-4.5 and GPT-4 developments.
12/9/2023: The Mixtral Rush
mixtral hermes-2.5 hermes-2 mistral-yarn ultrachat discoresearch fireworks-ai hugging-face mistral-ai benchmarking gpu-requirements multi-gpu quantization gptq chain-of-thought min-p-sampling top-p-sampling model-sampling model-merging model-performance small-models reasoning-consistency temperature-sampling bjoernp the_bloke rtyax kalomaze solbus calytrix
Mixtral's weights were released without code, prompting the DiscoResearch community and Fireworks AI to implement it rapidly. Despite these efforts, no significant benchmark improvements were reported, limiting its immediate usefulness for local LLM usage while still marking progress for the small-models community. Discussions in the DiscoResearch Discord covered Mixtral's performance compared to models like Hermes 2.5 and Hermes 2, with evaluations on benchmarks such as winogrande, truthfulqa_mc2, and arc_challenge. Technical topics included GPU requirements, multi-GPU setups, and quantization via GPTQ. Benchmarking strategies like grammar-based evaluation, chain of thought (CoT), and min_p sampling were explored, alongside model sampling techniques like min-p and top-p to enhance response stability and creativity. Users also discussed GPTs' learning limitations and the adaptability of models under varying conditions, emphasizing min_p sampling's role in enabling higher temperature settings for creativity.
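Min-p sampling, referenced throughout the thread, keeps only tokens whose probability is at least min_p times the probability of the most likely token, which is why it tolerates higher temperatures than a fixed top-p cutoff. A minimal NumPy sketch of the filtering step (illustrative only; real inference engines implement this on logits with additional bookkeeping):

```python
# Minimal min-p sampling step: drop tokens whose probability falls below
# min_p * max(prob), renormalise, then sample. Illustrative only; engines such
# as llama.cpp apply this alongside other samplers and on logits directly.

import numpy as np

def sample_min_p(logits, min_p=0.05, temperature=1.5, rng=None):
    rng = rng or np.random.default_rng()
    scaled = logits / temperature
    probs = np.exp(scaled - np.max(scaled))
    probs /= probs.sum()
    keep = probs >= min_p * probs.max()   # the min-p cutoff, scaled to the top token
    probs = np.where(keep, probs, 0.0)
    probs /= probs.sum()
    return int(rng.choice(len(probs), p=probs))

if __name__ == "__main__":
    logits = np.array([4.0, 3.5, 2.0, 0.5, -1.0])
    counts = np.bincount([sample_min_p(logits, min_p=0.1) for _ in range(1000)], minlength=5)
    print(counts)  # tail tokens below the cutoff never appear, even at high temperature
```

Because the cutoff scales with the top token's probability, raising the temperature spreads mass among plausible candidates without reintroducing the long tail of junk tokens, which matches the thread's point about creativity at high temperature.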