Frozen AI News archive

Not much happened today

**Twelve Labs** raised **$50m** in Series A funding co-led by NEA and **NVIDIA's NVentures** to advance multimodal AI. **LiveKit** secured **$22m** in funding. **Groq** announced running at **800 tokens/second**. OpenAI saw a resignation from Daniel Kokotajlo. Twitter users highlighted **Gemini 1.5 Flash** for high performance at low cost and **Gemini Pro** ranking #2 in Japanese language tasks. **Mixtral** models can run up to 8x faster on NVIDIA RTX GPUs using TensorRT-LLM. The **Mamba-2** architecture introduces state space duality for larger states and faster training, outperforming its predecessor. **Phi-3 Medium (14B)** ranks near GPT-3.5-Turbo-0613 but behind Llama 3 8B on the LMSYS leaderboard, while **Phi-3 Small (7B)** lands close to Llama-2-70B. Prompt engineering is emphasized for unlocking LLM capabilities. Data quality is critical for model performance, with upcoming masterclasses on data curation. Discussions on AI safety include a letter from frontier AI lab employees advocating whistleblower protections and debates on aligning AI to user intent versus broader humanity interests.

Canonical issue URL

We checked 7 subreddits, 384 Twitters and 29 Discords (400 channels and 4568 messages) for you. Estimated reading time saved (at 200wpm): 455 minutes.

Twelve Labs raised $50m, LiveKit raised $22m, Groq is now running at 800 tok/s, and there's an OpenAI resignation thread from Daniel Kokotajlo.

But no technical developments caught our eye.


{% if medium == 'web' %}

Table of Contents

[TOC]

{% else %}

The Table of Contents and Channel Summaries have been moved to the web version of this email: [{{ email.subject }}]({{ email_url }})!

{% endif %}


AI Twitter Recap

all recaps done by Claude 3 Opus, best of 4 runs. We are working on clustering and flow engineering with Haiku.

AI and Large Language Model Developments

Prompt Engineering and Data Curation

AI Safety and Alignment Discussions


AI Reddit Recap

Across r/LocalLlama, r/machinelearning, r/openai, r/stablediffusion, r/ArtificialIntelligence, r/LLMDevs, r/Singularity. Comment crawling works now but still has lots to improve!

TO BE COMPLETED


AI Discord Recap

A summary of Summaries of Summaries

  1. Finetuning and Optimization for LLMs:

    • Optimizing LLM Accuracy by OpenAI covers advanced techniques like prompt engineering and RAG, plus guidelines on acceptable performance levels. Check out the accompanying YouTube talk for deeper learning.

    • Discussing Multimodal Finetuning, users explored Opus 4o and MiniCPM-Llama3-V-2_5 for image text parsing and OCR, and considered retrieval methods for structured datasets (Countryside Stewardship grant finder).

    • Queries about continuous pretraining and memory efficiency highlight Unsloth AI's ability to halve VRAM usage compared to standard methods, detailed in their blog and GitHub page (a minimal loading sketch follows after this list).

  2. Model Performance and Inference Efficiency:

    • Modal impressed with 50x revenue growth, with revenue now exceeding eight figures, while also optimizing infrastructure. Insights were shared in Erik's talk at Data Council, alongside Modal's hiring link.

    • Discussions about a bitshift operation across all tinygrad backends and performance adjustments (PR #4728) versus traditional operations stirred debate over the actual improvement margins.

    • Users tackled CUDA recompile issues by realigning flags for effective compilation. They exchanged resources like the RISC-V Vector Processing YouTube video for further learning.

  3. Open-Source Developments and Community Projects:

    • LlamaIndex's integration with Google Gemini demonstrated a million-token context window facilitating complex queries, while practical problems were solved via custom solutions detailed in their documentation.

    • Modular's Deep Dive into Ownership in Mojo showcased detailed work by CEO Chris Lattner exploring developer-friendly innovations. Community feedback on making all functions async sparked diverse opinions on compatibility and ease of transition.

    • Projects like FineWeb from Hugging Face and the Phi-3 models climbing the @lmsysorg leaderboard highlight progress and ongoing research in open-source AI.

  4. System and Hardware Troubleshooting:

    • Members resolved several technical issues, such as infinite loops in an ollama llama3 setup on a MacBook M1 (fixed by troubleshooting system commands), while async processing in LM Studio was aided by practical discussions on GPU usage efficiency.

    • They discussed performance discrepancies between GPUs (e.g., a 6800 XT achieving only 30 it/s) and potential improvements with proper setup and driver considerations, showcasing a blend of peer support and technical expertise.

    • Open-source solutions like IC-Light, focused on improving image relighting, and CV-VAE for video models (ArXiv link) were eagerly shared among hardware and software enthusiasts.

  5. Health of AI Communities and Conferences:

    • Several platforms confirmed credit distributions to users, dealing with issues such as double credits, while fostering a supportive environment seen in community exchanges and career stories.

    • Events like Qwak's Infer: Summer '24 invite AI/ML enthusiasts for practical sessions with industry experts, further detailed in conference registration.

    • AI News newsletters faced formatting issues in ProtonMail dark mode, encouraging community-led problem-solving, and projects like Torchtune seeking recognition highlighted active engagement and the importance of visibility in community contributions.
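
Picking up the Unsloth memory-efficiency point from item 1, here is a minimal loading sketch; the model name and LoRA settings are illustrative assumptions, not taken from the discussion (see the Unsloth README for the current API):

```python
# Hedged sketch: pre-quantized 4-bit weights plus Unsloth's gradient checkpointing
# are the main levers behind the reported VRAM savings.
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/llama-3-8b-bnb-4bit",  # pre-quantized 4-bit checkpoint (assumed)
    max_seq_length=2048,
    load_in_4bit=True,
)

model = FastLanguageModel.get_peft_model(
    model,
    r=16,                                      # LoRA rank
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    lora_alpha=16,
    use_gradient_checkpointing="unsloth",      # Unsloth's memory-saving checkpointing
)
```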


PART 1: High level Discord summaries

LLM Finetuning (Hamel + Dan) Discord

While credits fueled the AI engine rooms and technical tidbits circulated, members swapped both assistance and anecdotes, reflecting the guild's collective progress and exchange.


CUDA MODE Discord

CUDA Conundrums and Triton Tips: Users discussed the tech used for generating digital humans without reaching a conclusion, and sought efficient methods for training LLMs on multiple GPUs. Challenges were noted in Triton when indexing a tensor in shared memory, and advice on Triton versus Torch was offered to those considering a switch to CUDA/Triton programming.
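
For those weighing the CUDA/Triton switch mentioned above, a minimal Triton kernel (the standard vector-add pattern from the Triton tutorials, not code from the discussion) shows the block/offset indexing style Triton uses instead of hand-managed shared memory:

```python
import torch
import triton
import triton.language as tl

@triton.jit
def add_kernel(x_ptr, y_ptr, out_ptr, n_elements, BLOCK_SIZE: tl.constexpr):
    pid = tl.program_id(axis=0)                      # one program instance per block
    offsets = pid * BLOCK_SIZE + tl.arange(0, BLOCK_SIZE)
    mask = offsets < n_elements                      # guard the ragged last block
    x = tl.load(x_ptr + offsets, mask=mask)
    y = tl.load(y_ptr + offsets, mask=mask)
    tl.store(out_ptr + offsets, x + y, mask=mask)

def add(x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    out = torch.empty_like(x)
    n = out.numel()
    grid = lambda meta: (triton.cdiv(n, meta["BLOCK_SIZE"]),)
    add_kernel[grid](x, y, out, n, BLOCK_SIZE=1024)
    return out

x = torch.randn(4096, device="cuda")
y = torch.randn(4096, device="cuda")
assert torch.allclose(add(x, y), x + y)
```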

Torch Troubleshooting and Profiling Proficiency: Users shared tips on debugging NHWC tensor normalization and on opening Metal traces with torch.mps.profiler, and sought to understand torch.compile along with its child function calls.
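
A small sketch of the pieces mentioned: channels-last (NHWC) layout, torch.compile, and an MPS trace; the torch.mps.profiler start/stop calls are assumed from recent PyTorch documentation and may differ by version:

```python
import torch

# NHWC debugging: channels_last changes only the memory layout, so the
# normalization output should match the default NCHW path.
x = torch.randn(8, 3, 32, 32)
norm = torch.nn.BatchNorm2d(3)
ref = norm(x)
out = norm(x.to(memory_format=torch.channels_last))
assert torch.allclose(ref, out, atol=1e-6)

# torch.compile wraps the module; torch._dynamo.explain (or TORCH_LOGS=graph_breaks)
# helps inspect which child function calls get traced vs. fall back to eager.
compiled = torch.compile(norm)
_ = compiled(x)

# On Apple Silicon, capture a Metal trace around a region of interest.
if torch.backends.mps.is_available():
    norm_mps, x_mps = norm.to("mps"), x.to("mps")
    torch.mps.profiler.start(mode="interval", wait_until_completed=True)
    _ = norm_mps(x_mps)
    torch.mps.profiler.stop()
```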

AO's Arrival and Sparsity Specs: News emerged about an Apple Metal kernel and 2:4 sparsity benchmarks being contributed to PyTorch AO, sparking debates on torch.ao.quantization deprecation and discussion of the efficiency of structured pruning.

Blog Browsing and Binary Banter: A post on the Goomba Lab blog delving into State Space Duality got a mention, while discussions flourished around PyTorch's uint2-7 types and custom dtype string conversion for a TrinaryTensor.

ARM's Acceleration Aspirations: Conversation revolved around the capabilities and support of ARM for Hexagon SDK and Adreno SDK, with a member sharing resources on ARM's performance and discussing its potential in GEMM implementations.


Unsloth AI (Daniel Han) Discord


Perplexity AI Discord

Wikipedia Bias in Academic Searches: A user highlighted potential issues with Perplexity's academic search capabilities, pointing out a bias towards Wikipedia over other sources like Britannica, and provided a link to the search results.

AI Services Experience Simultaneous Downtime: Reports emerged of simultaneous outages affecting Perplexity, ChatGPT, and similar AI services, spurring discussions about a larger infrastructure issue possibly connected to common providers like AWS.

The Opus 50 Limit has Users Longing for More: Users expressed dissatisfaction with the new Opus 50 limit, comparing it unfavorably to the previous Opus 600 and criticizing Perplexity's communication about it.

Perplexity vs. ChatGPT: A Duel of AI Titans: Discussions around the pros and cons of Perplexity AI Premium and ChatGPT touched on web search capabilities, model range, subscription limits, and practical use cases for both platforms.

Tech Talk Assistance for School: AI enthusiasts shared resources and advised using AI tools to assist with school presentations on AI, highlighting the need to explain both benefits and risks, along with sharing a YouTube video for technical understanding.


HuggingFace Discord


OpenAI Discord

AGI Amusement and Practical AI Tools: The Discord community pondered the essence of AGI with humorous suggestions such as flawless USB insertion skills, indicating the expectation for AGI to perform complex human-like tasks. Useful AI tools like Elicit were recommended for summarizing scientific research, notably commended for efficient paper summarization and synthesis.

ChatGPT Takes a Sick Day, Voice Mode Hesitation: Speculation around ChatGPT outages included backend provider issues and a potential DDoS attack by Anonymous Sudan. The rollout of new voice mode features in GPT-4o was discussed, with mixed feelings about the promised timeline and reported persistent issues such as 'bad gateway' errors and laggy Android keyboards.

The Prompt Engineering Conundrum: Challenges in prompt engineering were aired, especially the difficulty of adhering to complex guidelines, leading to calls for improved versions. WizardLM 2 was suggested as a high-performing alternative to GPT-4, and breaking down complex prompts into steps was recommended as an approach to optimize results.
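
A minimal sketch of the "break complex prompts into steps" advice, using the OpenAI Python SDK; the model name and prompts are illustrative:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def ask(prompt: str, context: str = "") -> str:
    messages = [{"role": "system", "content": "Follow the instructions exactly."}]
    if context:
        messages.append({"role": "user", "content": f"Context from the previous step:\n{context}"})
    messages.append({"role": "user", "content": prompt})
    resp = client.chat.completions.create(model="gpt-4o", messages=messages)
    return resp.choices[0].message.content

# Rather than one prompt carrying every guideline, chain smaller steps and
# feed each result forward.
outline = ask("List three key risks of deploying an LLM chatbot, one line each.")
draft = ask("Expand each listed risk into a short paragraph.", context=outline)
final = ask("Rewrite the paragraphs in a neutral, formal register.", context=draft)
print(final)
```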

API Affordability under Scrutiny: Conversations turned to the cost of using the GPT API versus ChatGPT Plus, with API potentially being the cheaper option depending on usage. Alternatives like OpenRouter and WizardLM 2 were proposed for better value, and an article titled "Take a Step Back: Evoking Reasoning via Abstraction in Large Language Models" was endorsed as a must-read for prompt engineering insights.
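
As a back-of-envelope illustration of "depending on usage": the per-token prices below are assumptions, so substitute current rates from the pricing page before drawing conclusions:

```python
PLUS_MONTHLY_USD = 20.00   # ChatGPT Plus subscription
INPUT_PER_1K = 0.005       # assumed $ per 1K input tokens
OUTPUT_PER_1K = 0.015      # assumed $ per 1K output tokens

def monthly_api_cost(chats_per_day, input_tokens=1_000, output_tokens=500, days=30):
    per_chat = (input_tokens / 1000) * INPUT_PER_1K + (output_tokens / 1000) * OUTPUT_PER_1K
    return chats_per_day * days * per_chat

for chats in (5, 20, 80):
    cost = monthly_api_cost(chats)
    cheaper = "API" if cost < PLUS_MONTHLY_USD else "ChatGPT Plus"
    print(f"{chats:>3} chats/day -> ~${cost:.2f}/month ({cheaper} wins)")
```

At these assumed rates the crossover sits around 50 chats per day, which is exactly the "depends on usage" caveat.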

Rollout Delays and Performance Puzzles: Delays in new feature rollouts and performance issues with large prompts were common concerns. To counteract the sluggish response with hefty prompts, lazy loading was mentioned as a potential solution to browser difficulties.


LM Studio Discord

LM Studio GPU Sagas: Engineers discussed the behavioral quirks of LM Studio models with an emphasis on GPU offload parameters, affirming that running on a dedicated GPU often yields better results than shared resources, and underlined the fine line between model size and GPU memory, noting that models should stay under 6GB to alleviate loading issues.
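
As a rough illustration of the offload knob under discussion: LM Studio's GPU offload setting corresponds to the llama.cpp layer-offload count, shown here through llama-cpp-python; the model path and quant are assumptions:

```python
from llama_cpp import Llama

llm = Llama(
    model_path="models/codeqwen-1_5-7b-chat.Q4_K_M.gguf",  # ~4-5 GB quant, inside the ~6GB guideline
    n_gpu_layers=-1,   # -1 offloads every layer to the GPU; lower it if VRAM runs short
    n_ctx=4096,
)
out = llm("### Question: what does layer offloading change?\n### Answer:", max_tokens=64)
print(out["choices"][0]["text"])
```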

Model Recommendations for Codewranglers: The CodeQwen 1.5 7B and Codestral 22b models were specifically recommended for code optimization tasks, while Wavecoder Ultra was also suggested despite its obscure launch history. Additionally, the utility of platforms like Extractum.io was highlighted for filtering models based on criteria such as VRAM and quantization.

The Fine Print of AI Performance: Conversation veered into the technical details of AI limitations, noting that performance is often limited by memory bandwidth, and members suggested targeting a workload of about 80% of the physical core count on processors. Uncertainty surrounding future Chinese language support was also raised.
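
A tiny sketch of the "about 80% of physical cores" guideline; psutil is an assumed dependency, with os.cpu_count() (logical cores) as a rough fallback:

```python
import os

try:
    import psutil
    physical = psutil.cpu_count(logical=False) or os.cpu_count()
except ImportError:
    physical = os.cpu_count()

threads = max(1, int(physical * 0.8))
print(f"{physical} physical cores -> target about {threads} inference threads")
```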

Do-It-Yourself Servers Draw Debate: Discussions around building custom homelab GPUs focused on VRAM capacity, driver support, and performance between manufacturers. Concerns were addressed regarding second-hand GPUs' reliability and members weighed pros and cons of AMD ROCm versus NVIDIA's ecosystem for stability and throughput.

Engineering a Beta Buff: In the world of software development and AI tinkering, continue.dev was lauded for local setups, particularly for supporting LM Studio configuration, while a call for testers was raised for a new AVX-only extension pack, showcasing the community's collaborative spirit and ongoing optimization endeavors.


Nous Research AI Discord


Stability.ai (Stable Diffusion) Discord


LAION Discord

SD3 Models Grapple with Grainy Results: Users highlighted a spotty noise issue in SD3 2B models despite advanced features like a 16ch VAE, with noise artifacts particularly evident in areas such as running water. Skepticism was voiced about the current validation metrics and loss functions for SD3 models, as they are perceived to poorly indicate model performance.

Open-source Breakthrough for Video Models: The community showed enthusiasm about an Apache2 licensed video-capable CV-VAE, expected to be a valuable resource for research on latent diffusion-based video models.

Peering into Future Model Architectures: Newly released research introduces the State Space Duality (SSD) framework and the cutting-edge Mamba-2 architecture, claimed to be 2-8X faster than its predecessor and contesting Transformer models in language processing tasks (arxiv paper).
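
For readers new to the duality being claimed, a rough sketch of the idea (notation is ours, following the Mamba line of work, and may differ from the paper): a selective SSM computes

$$h_t = A_t h_{t-1} + B_t x_t, \qquad y_t = C_t^{\top} h_t,$$

and unrolling the recurrence gives

$$y_t = \sum_{s \le t} C_t^{\top} \left( A_t A_{t-1} \cdots A_{s+1} \right) B_s \, x_s,$$

i.e. $y = Mx$ for a structured (semiseparable) matrix $M$. SSD's claim is that the same computation can be run either as the fast linear recurrence or in this attention-like matrix form, which is what lets Mamba-2 reuse matmul-heavy, hardware-friendly algorithms.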

Training Tactics Under Scrutiny: A preprint suggests that embeddings perturbed by slight corruption of pretraining datasets can improve diffusion models' image quality (arxiv preprint), while others mentioned using dropout and data augmentation to prevent overfitting in large diffusion models, and a debate arose on whether increasing training data difficulty can enhance model robustness.

Aesthetic Assessments and Realism Rivalries: Comparisons between SD3 images and Google's realistic examples have sparked discussions, with SD3 images being humorously likened to "women suffering a bad botox injection" (Reddit examples), and Google's work earning praise for its textured cloth and consistent hair representations (Google demo).


Eleuther Discord


OpenRouter (Alex Atallah) Discord


Modular (Mojo 🔥) Discord


LangChain AI Discord


tinygrad (George Hotz) Discord


LlamaIndex Discord


Interconnects (Nathan Lambert) Discord


Latent Space Discord


OpenAccess AI Collective (axolotl) Discord


Cohere Discord

Artificial Ivan Troubleshoots Like a Pro: Cohere has advanced its "Artificial Ivan" to version 4.0, enabling it to troubleshoot code, and shared an affirmations app tied to his development.

Real Ivan's Easing into Early Retirement?: One user's quip about a human counterpart, the "real Ivan," potentially retiring at 35 due to Artificial Ivan's accomplishments, brought a humorous spin to the project's success.

Cross-Project Synergy Unlocked: A user highlighted the integration of Aya 23 with Llama.cpp and LangChain, offering sample code and seeking assistance implementing a stopping condition in conversations using "\n".

Seeking Bilingual AI Conciseness: Detailing code aiming to produce concise, Spanish-language responses, the user outlined the use of prompts for conversation memory and parameters to enhance Aya 23's performance.
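
A hedged sketch of that setup using LangChain's LlamaCpp wrapper: stop generation at "\n" to end a turn and prompt for a single concise Spanish sentence; the GGUF path and prompt format are assumptions, not the user's actual code:

```python
from langchain_community.llms import LlamaCpp

llm = LlamaCpp(
    model_path="models/aya-23-8B.Q4_K_M.gguf",  # assumed local Aya 23 GGUF
    n_ctx=4096,
    temperature=0.3,
    stop=["\n"],  # the stopping condition discussed: cut generation at the first newline
)

prompt = (
    "Eres un asistente que responde siempre en español y en una sola frase.\n"
    "Usuario: ¿Qué es RAG?\n"
    "Asistente:"
)
print(llm.invoke(prompt))
```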

Cohere's Community Corner: Contrasting with LangChain, a playful comment from a guild member described the Cohere Discord as a "chronically online AI lab," pointing to lively interaction and engagement among its members.


Mozilla AI Discord


OpenInterpreter Discord


YAIG (a16z Infra) Discord


MLOps @Chipro Discord


DiscoResearch Discord


The LLM Perf Enthusiasts AI Discord has no new messages. If this guild has been quiet for too long, let us know and we will remove it.


The AI Stack Devs (Yoko Li) Discord has no new messages. If this guild has been quiet for too long, let us know and we will remove it.


The Datasette - LLM (@SimonW) Discord has no new messages. If this guild has been quiet for too long, let us know and we will remove it.


The AI21 Labs (Jamba) Discord has no new messages. If this guild has been quiet for too long, let us know and we will remove it.


PART 2: Detailed by-Channel summaries and links

{% if medium == 'web' %}

LLM Finetuning (Hamel + Dan) ▷ #general (16 messages🔥):

Links mentioned:


LLM Finetuning (Hamel + Dan) ▷ #workshop-1 (2 messages):


LLM Finetuning (Hamel + Dan) ▷ #🟩-modal (11 messages🔥):

Links mentioned:


LLM Finetuning (Hamel + Dan) ▷ #jarvis-labs (4 messages):


LLM Finetuning (Hamel + Dan) ▷ #hugging-face (130 messages🔥🔥):

Links mentioned:


LLM Finetuning (Hamel + Dan) ▷ #replicate (11 messages🔥):


LLM Finetuning (Hamel + Dan) ▷ #langsmith (4 messages):


LLM Finetuning (Hamel + Dan) ▷ #berryman_prompt_workshop (2 messages):


LLM Finetuning (Hamel + Dan) ▷ #workshop-2 (7 messages):

Link mentioned: Countryside Stewardship grant finder: Find information about Countryside Stewardship options, capital items and supplements


LLM Finetuning (Hamel + Dan) ▷ #workshop-4 (361 messages🔥🔥):

Links mentioned:


LLM Finetuning (Hamel + Dan) ▷ #axolotl (1 messages):

rumbleftw: Anyone with experience on finetuning instruct models with prompt template?


LLM Finetuning (Hamel + Dan) ▷ #zach-accelerate (5 messages):


LLM Finetuning (Hamel + Dan) ▷ #wing-axolotl (2 messages):


LLM Finetuning (Hamel + Dan) ▷ #charles-modal (19 messages🔥):

Links mentioned:


LLM Finetuning (Hamel + Dan) ▷ #credits-questions (22 messages🔥):


LLM Finetuning (Hamel + Dan) ▷ #braintrust (17 messages🔥):

Links mentioned:


LLM Finetuning (Hamel + Dan) ▷ #europe-tz (3 messages):


LLM Finetuning (Hamel + Dan) ▷ #announcements (1 messages):


LLM Finetuning (Hamel + Dan) ▷ #predibase (6 messages):


LLM Finetuning (Hamel + Dan) ▷ #career-questions-and-stories (7 messages):


LLM Finetuning (Hamel + Dan) ▷ #openai (5 messages):

Link mentioned: A Survey of Techniques for Maximizing LLM Performance: Join us for a comprehensive survey of techniques designed to unlock the full potential of Language Model Models (LLMs). Explore strategies such as fine-tunin...


CUDA MODE ▷ #general (2 messages):


CUDA MODE ▷ #triton (1 messages):


CUDA MODE ▷ #torch (3 messages):

Links mentioned:


CUDA MODE ▷ #jobs (7 messages):


CUDA MODE ▷ #beginner (11 messages🔥):

Link mentioned: TorchDynamo APIs for fine-grained tracing — PyTorch 2.3 documentation: no description found


CUDA MODE ▷ #torchao (15 messages🔥):

Links mentioned:


CUDA MODE ▷ #off-topic (4 messages):

Link mentioned: blog | Goomba Lab : no description found


CUDA MODE ▷ #irl-meetup (1 messages):


CUDA MODE ▷ #triton-viz (3 messages):


CUDA MODE ▷ #llmdotc (488 messages🔥🔥🔥):

Links mentioned:


CUDA MODE ▷ #bitnet (39 messages🔥):

Links mentioned:


CUDA MODE ▷ #arm (22 messages🔥):

Links mentioned:


Unsloth AI (Daniel Han) ▷ #general (427 messages🔥🔥🔥):

Links mentioned:


Unsloth AI (Daniel Han) ▷ #announcements (1 messages):

Links mentioned:


Unsloth AI (Daniel Han) ▷ #random (18 messages🔥):

Link mentioned: rubend18/ChatGPT-Jailbreak-Prompts · Datasets at Hugging Face: no description found


Unsloth AI (Daniel Han) ▷ #help (141 messages🔥🔥):

Links mentioned:


Unsloth AI (Daniel Han) ▷ #community-collaboration (3 messages):


Perplexity AI ▷ #general (556 messages🔥🔥🔥):

Links mentioned:


Perplexity AI ▷ #sharing (11 messages🔥):


HuggingFace ▷ #announcements (1 messages):

Links mentioned:


HuggingFace ▷ #general (418 messages🔥🔥🔥):

Links mentioned:


HuggingFace ▷ #today-im-learning (1 messages):

asuka_minato: yes, origin is in torch and deployed on colab. but need on-edge inference, so riir


HuggingFace ▷ #cool-finds (4 messages):

Links mentioned:


HuggingFace ▷ #i-made-this (5 messages):

Links mentioned:


HuggingFace ▷ #core-announcements (1 messages):

Link mentioned: Releases · huggingface/diffusers: 🤗 Diffusers: State-of-the-art diffusion models for image and audio generation in PyTorch and FLAX. - huggingface/diffusers


HuggingFace ▷ #computer-vision (11 messages🔥):

Links mentioned:


HuggingFace ▷ #NLP (1 messages):


HuggingFace ▷ #diffusion-discussions (2 messages):

Links mentioned:


OpenAI ▷ #ai-discussions (136 messages🔥🔥):

Link mentioned: Elicit: The AI Research Assistant: Use AI to search, summarize, extract data from, and chat with over 125 million papers. Used by over 2 million researchers in academia and industry.


OpenAI ▷ #gpt-4-discussions (26 messages🔥):

Link mentioned: OpenAI Status: no description found


OpenAI ▷ #prompt-engineering (81 messages🔥🔥):


OpenAI ▷ #api-discussions (81 messages🔥🔥):


LM Studio ▷ #💬-general (56 messages🔥🔥):

Link mentioned: Tf2engineer Imposter GIF - Tf2Engineer Imposter It Could Be You - Discover & Share GIFs: Click to view the GIF


LM Studio ▷ #🤖-models-discussion-chat (69 messages🔥🔥):

Links mentioned:


LM Studio ▷ #⚙-configs-discussion (1 messages):


LM Studio ▷ #🎛-hardware-discussion (129 messages🔥🔥):

Links mentioned:


LM Studio ▷ #🧪-beta-releases-chat (2 messages):


LM Studio ▷ #autogen (1 messages):


LM Studio ▷ #avx-beta (3 messages):


LM Studio ▷ #🛠-dev-chat (1 messages):


Nous Research AI ▷ #off-topic (12 messages🔥):

Links mentioned:


Nous Research AI ▷ #interesting-links (1 messages):

sidfeels: https://laion.ai/notes/open-gpt-4-o/


Nous Research AI ▷ #general (198 messages🔥🔥):

Links mentioned:


Nous Research AI ▷ #ask-about-llms (1 messages):

vorpal_strikes: cant wait for The Emergence of Nous World in 2030


Nous Research AI ▷ #world-sim (4 messages):

Links mentioned:


Stability.ai (Stable Diffusion) ▷ #general-chat (159 messages🔥🔥):

Links mentioned:

Instagram post by tucumandigital (January 2, 2024; 1,779 likes, 13 comments): "#INCREDIBLE 🔴 THE MODEL CRUSHING IT ON AN ADULT PLATFORM, BUT SHE ISN'T REAL 😮". The post is about Emily Pellegrini, who has gained thousands of followers in recent weeks by creating a profile on an adult platform that challenges the well-known OnlyFans. AI-generated influencers are spreading across social media, and this 21-year-old "model" selling adult content is going viral on Instagram. She has generated close to $10,000 in just six weeks from content sold through Fanvue, an adult platform competing with OnlyFans; despite "living" in Italy, every one of her images is created with AI, a method that has grown popular online in recent months. The monthly subscription is roughly nine dollars, she already has around 100 regular subscribers, and in barely six weeks she has accumulated nearly 90,000 Instagram followers, with a highlighted-stories section featuring "friends" who are other AI-created models from the platform. Her fame has gone so far that the New York Post dedicated an article to both her and Fanvue, a platform similar to OnlyFans except that all its models are exclusively AI-created. The site's founder, Will Monange, maintains that AI is a "tool" and "an extension of who we are and what we do", in contrast to those who see it as a replacement for human creativity.

GitHub - jaisidhsingh/CoN-CLIP: Contribute to jaisidhsingh/CoN-CLIP development by creating an account on GitHub.

ForRealXL - v0.5 | Stable Diffusion Checkpoint | Civitai: Hello ♥ for whatever reason you want to show me appreciation, you can: ❤️ Ko-Fi ❤️ This is an experimental Checkpoint because it's my first. Special T...

GitHub - chaiNNer-org/chaiNNer: A node-based image processing GUI aimed at making chaining image processing tasks easy and customizable. Born as an AI upscaling application, chaiNNer has grown into an extremely flexible and powerful programmatic image processing application.

Model Database - Upscale Wiki: no description found


LAION ▷ #general (107 messages🔥🔥):

Links mentioned:


LAION ▷ #research (17 messages🔥):

Links mentioned:


Eleuther ▷ #general (68 messages🔥🔥):

Links mentioned:


Eleuther ▷ #research (46 messages🔥):

Links mentioned:


Eleuther ▷ #lm-thunderdome (9 messages🔥):

Link mentioned: add arc_challenge_mt by jonabur · Pull Request #1900 · EleutherAI/lm-evaluation-harness: This PR adds tasks for machine-translated versions of arc challenge for 11 languages. We will also be adding more languages in the future.


OpenRouter (Alex Atallah) ▷ #app-showcase (1 messages):

merfippio: I know this is really late, but did you find the FE help you were looking for?


OpenRouter (Alex Atallah) ▷ #general (106 messages🔥🔥):

Links mentioned:


Modular (Mojo 🔥) ▷ #general (26 messages🔥):

Links mentioned:


Modular (Mojo 🔥) ▷ #💬︱twitter (2 messages):


Modular (Mojo 🔥) ▷ #✍︱blog (1 messages):

Link mentioned: Modular: Deep Dive into Ownership in Mojo: We are building a next-generation AI developer platform for the world. Check out our latest post: Deep Dive into Ownership in Mojo


Modular (Mojo 🔥) ▷ #ai (1 messages):

melodyogonna: I don't think cryptography libraries has been added to the stdlib yet


Modular (Mojo 🔥) ▷ #tech-news (1 messages):

Link mentioned: [POCL'24] Concurrent Mutation must go: Matthew J. Parkinson, Sylvan Clebsch, Tobias Wrigstad, Sophia Drossopoulou, Elias Castegren, Ellen Arvidsson, Luke Chees...


Modular (Mojo 🔥) ▷ #🔥mojo (55 messages🔥🔥):

Links mentioned:


Modular (Mojo 🔥) ▷ #performance-and-benchmarks (1 messages):


Modular (Mojo 🔥) ▷ #nightly (18 messages🔥):

Links mentioned:


LangChain AI ▷ #general (50 messages🔥):

Links mentioned:


LangChain AI ▷ #langserve (1 messages):


LangChain AI ▷ #langchain-templates (4 messages):


LangChain AI ▷ #share-your-work (7 messages):

Links mentioned:


LangChain AI ▷ #tutorials (2 messages):

Links mentioned:


tinygrad (George Hotz) ▷ #general (43 messages🔥):

Links mentioned:


tinygrad (George Hotz) ▷ #learn-tinygrad (13 messages🔥):


LlamaIndex ▷ #blog (2 messages):


LlamaIndex ▷ #general (43 messages🔥):

Links mentioned:


Interconnects (Nathan Lambert) ▷ #news (8 messages🔥):

Link mentioned: Tweet from Philipp Schmid (@_philschmid): Phi-3 Medium (14B) and Small (7B) models are on the @lmsysorg leaderboard! 😍 Medium ranks near GPT-3.5-Turbo-0613, but behind Llama 3 8B. Phi-3 Small is close to Llama-2-70B, and Mistral fine-tunes. ...


Interconnects (Nathan Lambert) ▷ #ml-drama (29 messages🔥):

Links mentioned:


Interconnects (Nathan Lambert) ▷ #random (6 messages):


Interconnects (Nathan Lambert) ▷ #memes (1 messages):

420gunna: 👍


Latent Space ▷ #ai-general-chat (41 messages🔥):

Links mentioned:


OpenAccess AI Collective (axolotl) ▷ #general (17 messages🔥):

Link mentioned: Tweet from Xin Eric Wang (@xwang_lk): Can we really trust AI in critical areas like medical image diagnosis? No, and they are even worse than random. Our latest study, "Worse than Random? An Embarrassingly Simple Probing Evaluation of...


OpenAccess AI Collective (axolotl) ▷ #axolotl-help-bot (13 messages🔥):

Links mentioned:


Cohere ▷ #general (13 messages🔥):

Link mentioned: no title found: no description found


Cohere ▷ #project-sharing (1 messages):


Mozilla AI ▷ #llamafile (9 messages🔥):

Links mentioned:


OpenInterpreter ▷ #general (7 messages):

Link mentioned: Spacetop - Meet The AR Laptop for Work: Discover Spacetop, the AR laptop for work. Redefine mobile computing. Experience AR like never before with unmatched performance and cutting-edge tech.


OpenInterpreter ▷ #O1 (1 messages):


MLOps @Chipro ▷ #events (1 messages):

Russ Wilcox, Hudson Buzby

Link mentioned: Infer Summer ‘24 by Qwak | The Engineering Behind AI and ML: Infer Summer ‘24 by Qwak brings AI leaders to share how the world’s leading companies use ML and AI in production. Join live on Jun 26, 2024, 11:00 AM EDT


DiscoResearch ▷ #general (1 messages):





{% else %}

The full channel by channel breakdowns have been truncated for email.

If you want the full breakdown, please visit the web version of this email: [{{ email.subject }}]({{ email_url }})!

If you enjoyed AInews, please share with a friend! Thanks in advance!

{% endif %}