All tags
Model: "gpt-3.5-turbo"
Snowflake Arctic: Fully Open 10B+128x4B Dense-MoE Hybrid LLM
snowflake-arctic phi-3 llama-3-70b llama-3 stable-diffusion-3 sd3-turbo gpt-3.5-turbo snowflake databricks deepseek deepspeed nvidia stable-diffusion adobe apple llamaindex lmsys openai mixture-of-experts curriculum-learning model-release image-generation video-upscaling quantization inference-speed benchmarking model-comparison open-source on-device-ai
Snowflake Arctic is a notable new foundation language model released under Apache 2.0, claiming superiority over Databricks in data warehouse AI applications and adopting a mixture-of-experts architecture inspired by DeepSeekMOE and DeepSpeedMOE. The model employs a 3-stage curriculum training strategy similar to the recent Phi-3 paper. In AI image and video generation, Nvidia introduced the Align Your Steps technique improving image quality at low step counts, while Stable Diffusion 3 and SD3 Turbo models were compared for prompt understanding and image quality. Adobe launched an AI video upscaling project enhancing blurry videos to HD, though with some high-resolution artifacts. Apple released open-source on-device language models with code and training logs, diverging from typical weight-only releases. The Llama-3-70b model ties for first place on the LMSYS leaderboard for English queries, and Phi-3 (4B params) outperforms GPT-3.5 Turbo in the banana logic benchmark. Fast inference and quantization of Llama 3 models were demonstrated on MacBook devices.
OpenAI's Instruction Hierarchy for the LLM OS
phi-3-mini openelm claude-3-opus gpt-4-turbo gpt-3.5-turbo llama-3-70b rho-1 mistral-7b llama-3-8b llama-3 openai microsoft apple deepseek mistral-ai llamaindex wendys prompt-injection alignment benchmarking instruction-following context-windows model-training model-deployment inference performance-optimization ai-application career-advice drive-thru-ai
OpenAI published a paper introducing the concept of privilege levels for LLMs to address prompt injection vulnerabilities, improving defenses by 20-30%. Microsoft released the lightweight Phi-3-mini model with 4K and 128K context lengths. Apple open-sourced the OpenELM language model family with an open training and inference framework. An instruction accuracy benchmark compared 12 models, with Claude 3 Opus, GPT-4 Turbo, and Llama 3 70B performing best. The Rho-1 method enables training state-of-the-art models using only 3% of tokens, boosting models like Mistral. Wendy's deployed AI-powered drive-thru ordering, and a study found Gen Z workers prefer generative AI for career advice. Tutorials on deploying Llama 3 models on AWS EC2 highlight hardware requirements and inference server use.
Cohere Command R+, Anthropic Claude Tool Use, OpenAI Finetuning
c4ai-command-r-plus claude-3 gpt-3.5-turbo gemini mistral-7b gemma-2 claude-3-5 llama-3 vicuna cohere anthropic openai microsoft stability-ai opera-software meta-ai-fair google-deepmind mistral-ai tool-use multilingual-models rag fine-tuning quantum-computing audio-generation local-inference context-windows model-size-analysis model-comparison
Cohere launched Command R+, a 104B dense model with 128k context length focusing on RAG, tool-use, and multilingual capabilities across 10 key languages. It supports Multi-Step Tool use and offers open weights for research. Anthropic introduced tool use in beta for Claude, supporting over 250 tools with new cookbooks for practical applications. OpenAI enhanced its fine-tuning API with new upgrades and case studies from Indeed, SK Telecom, and Harvey, promoting DIY fine-tuning and custom model training. Microsoft achieved a quantum computing breakthrough with an 800x error rate improvement and the most usable qubits to date. Stability AI released Stable Audio 2.0, improving audio generation quality and control. The Opera browser added local inference support for large language models like Meta's Llama, Google's Gemma, and Vicuna. Discussions on Reddit highlighted Gemini's large context window, analysis of GPT-3.5-Turbo model size, and a battle simulation between Claude 3 and ChatGPT using local 7B models like Mistral and Gemma.
Not much happened today
jamba-v0.1 command-r gpt-3.5-turbo openchat-3.5-0106 mixtral-8x7b mistral-7b midnight-miqu-70b-v1.0.q5_k_s cohere lightblue openai mistral-ai nvidia amd hugging-face ollama rag mixture-of-experts model-architecture model-analysis debate-persuasion hardware-performance gpu-inference cpu-comparison local-llm stable-diffusion ai-art-bias
RAGFlow open sourced, a deep document understanding RAG engine with 16.3k context length and natural language instruction support. Jamba v0.1, a 52B parameter MoE model by Lightblue, released but with mixed user feedback. Command-R from Cohere available on Ollama library. Analysis of GPT-3.5-Turbo architecture reveals about 7 billion parameters and embedding size of 4096, comparable to OpenChat-3.5-0106 and Mixtral-8x7B. AI chatbots, including GPT-4, outperform humans in debates on persuasion. Mistral-7B made amusing mistakes on a math riddle. Hardware highlights include a discounted HGX H100 640GB machine with 8 H100 GPUs bought for $58k, and CPU comparisons between Epyc 9374F and Threadripper 1950X for LLM inference. GPU recommendations for local LLMs focus on VRAM and inference speed, with users testing 4090 GPU and Midnight-miqu-70b-v1.0.q5_k_s model. Stable Diffusion influences gaming habits and AI art evaluation shows bias favoring human-labeled art.
Welcome /r/LocalLlama!
cerebrum-8x7b mixtral-7b gpt-3.5-turbo gemini-pro moistral-11b-v1 claude-opus qwen-vl-chat sakana openinterpreter reddit aether-research mistral-ai nvidia lmdeploy model-merging benchmarking quantization performance-optimization deployment vision fine-tuning training-data synthetic-data rag gui
Sakana released a paper on evolutionary model merging. OpenInterpreter launched their O1 devkit. Discussions highlight Claude Haiku's underrated performance with 10-shot examples. On Reddit's IPO, AINews introduces Reddit summaries starting with /r/LocalLlama, covering upcoming subreddits like r/machinelearning and r/openai. Aether Research released Cerebrum 8x7b based on Mixtral, matching GPT-3.5 Turbo and Gemini Pro on reasoning tasks, setting a new open-source reasoning SOTA. Moistral 11B v1 finetuned model from Cream-Phi-2 creators was released. A creative writing benchmark uses Claude Opus as judge. Hobbyists explore 1.58 BitNet ternary quantization and 1-bit LLMs training. Nvidia's Blackwell (h200) chip supports FP4 precision quantization. LMDeploy v0.2.6+ enables efficient vision-language model deployment with models like Qwen-VL-Chat. Users seek GUIs for LLM APIs with plugin and RAG support. Pipelines for synthetic training data generation and fine-tuning language models for chat are discussed.
DeepMind SIMA: one AI, 9 games, 600 tasks, vision+language ONLY
llama-3 claude-3-opus claude-3 gpt-3.5-turbo deepmind cognition-labs deepgram modal-labs meta-ai-fair anthropic multimodality transformer software-engineering ai-agents ai-infrastructure training text-to-speech speech-to-text real-time-processing model-architecture benchmarking andrej-karpathy arav-srinivas francois-chollet yann-lecun soumith-chintala john-carmack
DeepMind SIMA is a generalist AI agent for 3D virtual environments evaluated on 600 tasks across 9 games using only screengrabs and natural language instructions, achieving 34% success compared to humans' 60%. The model uses a multimodal Transformer architecture. Andrej Karpathy outlines AI autonomy progression in software engineering, while Arav Srinivas praises Cognition Labs' AI agent demo. François Chollet expresses skepticism about automating software engineering fully. Yann LeCun suggests moving away from generative models and reinforcement learning towards human-level AI. Meta's Llama-3 training infrastructure with 24k H100 Cluster Pods is shared by Soumith Chintala and Yann LeCun. Deepgram's Aura offers low-latency speech APIs, and Modal Labs' Devin AI demonstrates document navigation and interaction with ComfyUI. Memes and humor circulate in the AI community.
12/18/2023: Gaslighting Mistral for fun and profit
gpt-4-turbo gpt-3.5-turbo claude-2.1 claude-instant-1 gemini-pro gpt-4.5 dalle-3 openai anthropic google-deepmind prompt-engineering api model-performance ethics role-play user-experience ai-impact-on-jobs ai-translation technical-issues sam-altman
OpenAI Discord discussions reveal comparisons among language models including GPT-4 Turbo, GPT-3.5 Turbo, Claude 2.1, Claude Instant 1, and Gemini Pro, with GPT-4 Turbo noted for user-centric explanations. Rumors about GPT-4.5 remain unconfirmed, with skepticism prevailing until official announcements. Users discuss technical challenges like slow responses and API issues, and explore role-play prompt techniques to enhance model performance. Ethical concerns about AI's impact on academia and employment are debated. Future features for Dalle 3 and a proposed new GPT model are speculated upon, while a school project seeks help using the OpenAI API. The community also touches on AI glasses and job market implications of AI adoption.
12/11/2023: Mixtral beats GPT3.5 and Llama2-70B
mixtral-8x7b gpt-4 gpt-3.5-turbo llama-3 openhermes-2.5 llava-v1.5-13b-gptq mistral-ai openai huggingface sparse-mixture-of-experts fine-tuning quantization gpu-hardware transformers model-deployment open-source coding-datasets
Mistral AI announced the Mixtral 8x7B model featuring a Sparse Mixture of Experts (SMoE) architecture, sparking discussions on its potential to rival GPT-4. The community debated GPU hardware options for training and fine-tuning transformer models, including RTX 4070s, A4500, RTX 3090s with nvlink, and A100 GPUs. Interest was expressed in fine-tuning Mixtral and generating quantized versions, alongside curating high-quality coding datasets. Resources shared include a YouTube video on open-source model deployment, an Arxiv paper, GitHub repositories, and a blog post on Mixture-of-Experts. Discussions also touched on potential open-source releases of GPT-3.5 Turbo and llama-3, and running OpenHermes 2.5 on Mac M3 Pro with VRAM considerations.