Frozen AI News archive

5 small news items

**OpenAI** announces that ChatGPT's voice mode is "coming soon." **Leopold Aschenbrenner** launched a 5-part AGI timelines series predicting a **trillion-dollar cluster** from current AI progress. **Will Brown** released a comprehensive GenAI Handbook. **Cohere** completed a **$450 million funding round** at a **$5 billion valuation**. DeepMind research on **uncertainty quantification in LLMs** and an **xLSTM model** outperforming transformers were highlighted. Studies on the **geometry of concepts in LLMs** and methods to **eliminate matrix multiplication** for efficiency gains were shared. Discussions on **parameter-efficient fine-tuning (PEFT)** and **automated alignment of LLMs** were noted. New tools include **LangGraph** for AI agents, **LlamaIndex** with longer context windows, and **Hugging Face's** integration with **NVIDIA NIM** for Llama3. **Mistral AI** released a fine-tuning API for their models.

Canonical issue URL

AI News for 6/4/2024-6/5/2024! We checked 7 subreddits, 384 Twitters and 29 Discords (401 channels, and 3628 messages) for you. Estimated reading time saved (at 200wpm): 404 minutes.


{% if medium == 'web' %}

Table of Contents

[TOC]

{% else %}

The Table of Contents and Channel Summaries have been moved to the web version of this email: [{{ email.subject }}]({{ email_url }})!

{% endif %}


AI Twitter Recap

all recaps done by Claude 3 Opus, best of 4 runs. We are working on clustering and flow engineering with Haiku.

AI Models and Architectures

Tools and Frameworks

Datasets and Benchmarks

Applications and Use Cases

Discussions and Opinions


AI Reddit Recap

Across r/LocalLlama, r/machinelearning, r/openai, r/stablediffusion, r/ArtificialInteligence, r/LLMDevs, r/Singularity. Comment crawling works now but still has lots of room to improve!

Here is a summary of the recent AI developments, organized by topic and with key details bolded and linked to relevant sources:

AI Model Releases and Capabilities

AI Outages and Concerns

AI Investments and Partnerships

AI Models and Benchmarks


AI Discord Recap

A summary of Summaries of Summaries

1. Finetuning Techniques and Model Integration:

2. Issues in Model Training and Optimization:

3. New Tools and Resources in AI:

4. Community Concerns and Collaborative Projects:

5. Security and Ethical Discussions in AI:


PART 1: High level Discord summaries

LLM Finetuning (Hamel + Dan) Discord


Unsloth AI (Daniel Han) Discord


Perplexity AI Discord


CUDA MODE Discord


HuggingFace Discord


LM Studio Discord

Troubleshooting Model Loading in LM Studio: Users faced model-loading failures due to insufficient VRAM; the proposed workaround is to disable GPU offloading. A specific case highlighted problems loading Llama 70B, which was not saved as a GGUF file; a symlink or file conversion was recommended.

Discussions Highlight Model Performance and Compatibility: The Command R model showed suboptimal performance when offloaded to Metal. For text enhancement, no specific model was recommended, though 13B models on the leaderboard were suggested as a starting point. Additionally, difficulties with SMAUG's BPE tokenizer (a Llama 3 derivative) were reported on version 0.2.24.

Chatter About Workstation GPUs and Operating Systems: The ASRock Radeon RX 7900 XTX & 7900 XT Workstation GPUs sparked interest, especially due to their AI-oriented design. There were mixed sentiments about Linux's user-friendliness, and discussions about switching to Linux prompted by privacy concerns over Windows' Recall feature.

Feedback for Bug in LM Studio: A bug in LM Studio v0.2.24 was pointed out, involving extra escape characters in preset configurations such as `"input_suffix": "\\n\\nAssistant: "`.

Privacy and Security: Privacy concerns were raised about Windows' Recall feature potentially creating security vulnerabilities by amassing sensitive data. In a lighter tone, anecdotes of IT support challenges, including a computer tainted with the odor of cat urine, brought humor to the discussions on tech-support woes.
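The preset bug reported above comes down to JSON string escaping: `"\\n"` in a JSON file decodes to a literal backslash followed by the letter `n`, not a newline. A minimal sketch (the preset fragment here is hypothetical, not an actual LM Studio file):

```python
import json

# Buggy preset: doubled backslashes, so the suffix contains the two
# characters "\" and "n" instead of real newlines.
buggy = json.loads(r'{"input_suffix": "\\n\\nAssistant: "}')

# Fixed preset: single-escaped "\n" decodes to actual newline characters.
fixed = json.loads(r'{"input_suffix": "\n\nAssistant: "}')

print(repr(buggy["input_suffix"]))  # '\\n\\nAssistant: '  (no real newlines)
print(repr(fixed["input_suffix"]))  # '\n\nAssistant: '    (two real newlines)
```

A model fed the buggy suffix sees the literal text `\n\nAssistant:` in its prompt, which can noticeably degrade outputs.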


OpenAI Discord


Stability.ai (Stable Diffusion) Discord


Eleuther Discord


Nous Research AI Discord


LlamaIndex Discord


LAION Discord

ChatGPT 4 Adds Acting Chops: OpenAI's ChatGPT 4 introduces impressive new voice generation features as seen in a shared video, stirring excitement with its ability to craft unique character voices.

DALLE3's Diminishing Returns: Users express concerns over a noticeable degradation in the quality of DALLE3 outputs, with disappointments echoed for both traditional usage and API integrations.

Debating the Ethics of AI Monetization: Recent discussions reveal a palpable frustration within the community over non-commercial licenses for AI models, criticizing motivations centered around financial gain and the extensive resources required for training models such as T5.

LLMs Lose their Logic: A new Open-Sci collective paper exposes the "dramatic breakdown" in reasoning exhibited by large language models, available for review here with accompanying codebase and project homepage.

WebSocket Whims: An issue with WebSockets in the WhisperSpeech service within the WhisperFusion pipeline prompted a detailed inquiry on StackOverflow, in hopes of resolving the unexpected closures.


Modular (Mojo 🔥) Discord

Rust Rises, Mojo Eyes New Heights: A member praised a YouTube tutorial highlighting the safety of Rust in systems development through FFI encapsulation, evidencing the engineering community's interest in secure and efficient systems programming.

Transitional Tips for Python Devs: A Python to Mojo transition guide on YouTube was lauded for compiling essential low-level computer science knowledge beneficial for non-CS engineers moving to Mojo.

Mojo's Enumeration Alternatives: While Mojo currently lacks Enum types, the conversation turned to its accommodation of Variants with a nod towards the ongoing GitHub discussion for those interested in potential developments.

Nightly Updates Stir Commotion: A new release of the Mojo compiler (2024.6.512) was announced, along with advice on managing versions in VSCode, while challenges were addressed in adapting to changes like `Coroutine.__await__` becoming consuming, as shown in the changelog.

Encryption Entreats Extension: Capturing the intersection of security and programming, a user emphasized the urgency for a cryptography library in Mojo, suggesting the feature would be "fire" and underscoring the need to build robustness into the language's capabilities.


Interconnects (Nathan Lambert) Discord


Cohere Discord

Will Cohere's API Remain Free?: Members are buzzing with speculation that Cohere's free API might be discontinued; others urged seeking official confirmation and disregarding unverified rumors.

Bringing Order to Multi-User Bot Chats: Engineers discussed the challenges of engaging large language models (LLMs) in multi-user chat threads, suggesting that tagging messages with usernames could improve clarity.

Hunting for the Ultimate Chat Component: A community member asked about a React-based chat component and was pointed to the Cohere Toolkit, which isn't built on React overall but may include React-based elements such as its chatbox.

React Components and Cohere Synergy: Though the Cohere Toolkit lacks standalone React components, the open-source tool positions itself as a useful resource for implementing RAG applications, and is potentially compatible with React front ends.
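The username-tagging suggestion above can be sketched in a few lines. The message schema (`{"user": ..., "text": ...}`) is illustrative rather than any particular chat API's format:

```python
def render_multiuser_thread(messages):
    """Flatten a multi-user chat into a single prompt, tagging each turn
    with its author so the model can tell speakers apart."""
    return "\n".join(f"[{m['user']}]: {m['text']}" for m in messages)

thread = [
    {"user": "alice", "text": "Can the bot summarize this thread?"},
    {"user": "bob", "text": "Only if it knows who said what."},
]

# The tagged transcript is what gets sent to the LLM as context.
print(render_multiuser_thread(thread))
# [alice]: Can the bot summarize this thread?
# [bob]: Only if it knows who said what.
```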


OpenAccess AI Collective (axolotl) Discord

Bug Hunt in Memory Lane: Users reported Out of Memory (OOM) errors when running a target module on 2x T4 16GB GPUs, alongside anomalous `loss: 0.0` readings, which could indicate a problem in parameter configuration or resource allocation.

Data Feast for Hungry Models: The HuggingFace FineWeb dataset, a sizeable collection of 15 trillion tokens sourced from CommonCrawl, is making waves for its potential to lower the barrier to training large models, though concerns were raised about the computational and financial resources required to utilize it fully.

Deepspeed Dominates Model Training Chatter: Engineering discussions revealed a preference for running Deepspeed tasks from the command line, including a successful fine-tune of Llama3 with Deepspeed ZeRO-2; QLoRA was chosen over LoRA for fine-tuning.

Seeking Speedy Solutions: A member vented frustration over Runpod's slow boot times: booting a 14-billion-parameter model takes about a minute, hurting cost-effectiveness. Questions were raised about alternative serverless providers with faster model loading.

Model Mingle and Muddle: While there is clear enthusiasm for the GLM-4 9B model, concrete community feedback on its performance and use cases remains scarce, suggesting either that deployments are still new or that user experiences simply aren't being shared yet.


Latent Space Discord


OpenInterpreter Discord


tinygrad (George Hotz) Discord


LangChain AI Discord

Outdated Docs Cause Commotion: LangChain and OpenAI documentation woes have caught the attention of members noting significant discrepancies due to API updates. A suggestion pointed engineers towards the primary code stack itself for the most current insights.

DB Wars: MongoDB vs. Chroma DB: When an engineer pondered using MongoDB for vector storage, a clarification followed that MongoDB is designed for storing JSON documents rather than embeddings; the inquirer was directed to MongoDB's support resources or ChatGPT.

Verba: RAG Under the Microscope: The community took an interest in Verba, a Weaviate-powered RAG chatbot, with a request for user experiences being aired, indicating an exploration into Weaviate's retrieval augmentation capabilities.

SQL Agent Leaves Users Puzzled: Issues surfaced with the SQL agent not delivering final answers, sparking a troubleshooting discussion around the cryptic behavior.

Graph-Based Knowledge with LangChain: An engineer showcased a LangChain guide focused on constructing knowledge graphs from unstructured text, prompting inquiries on integrating LLMGraphTransformer with Ollama models, a nod to the constant pursuit of enhanced knowledge synthesis.

VisualAgents Usher in Drag-and-Drop LLM Patterns: A live demonstration via a YouTube video on using VisualAgents highlighted the creative process entailed in arranging agent flow patterns, reflecting a trend towards more intuitive interfaces in LLM chain management.
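As a rough illustration of what the knowledge-graph guide above produces, here is a minimal, LangChain-free sketch: the hard-coded triples stand in for what an LLM-based extractor such as LLMGraphTransformer would return, and all the example facts are hypothetical.

```python
from collections import defaultdict

# Stand-in for LLM-extracted (subject, relation, object) triples;
# a real pipeline would generate these from unstructured text.
triples = [
    ("LangChain", "provides", "LLMGraphTransformer"),
    ("LLMGraphTransformer", "outputs", "graph documents"),
    ("graph documents", "contain", "nodes and relationships"),
]

# Index triples by subject so related facts can be retrieved together.
graph = defaultdict(list)
for subj, rel, obj in triples:
    graph[subj].append((rel, obj))

# Query: what do we know about "LangChain"?
print(graph["LangChain"])  # [('provides', 'LLMGraphTransformer')]
```

The value of the graph form is exactly this kind of lookup: facts about an entity cluster under one key instead of being scattered across source documents.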


OpenRouter (Alex Atallah) Discord


MLOps @Chipro Discord


Mozilla AI Discord


DiscoResearch Discord


LLM Perf Enthusiasts AI Discord


The AI Stack Devs (Yoko Li) Discord has no new messages. If this guild has been quiet for too long, let us know and we will remove it.


The Datasette - LLM (@SimonW) Discord has no new messages. If this guild has been quiet for too long, let us know and we will remove it.


The AI21 Labs (Jamba) Discord has no new messages. If this guild has been quiet for too long, let us know and we will remove it.


The YAIG (a16z Infra) Discord has no new messages. If this guild has been quiet for too long, let us know and we will remove it.


PART 2: Detailed by-Channel summaries and links

{% if medium == 'web' %}

LLM Finetuning (Hamel + Dan) ▷ #general (46 messages🔥):


LLM Finetuning (Hamel + Dan) ▷ #workshop-1 (1 messages):


LLM Finetuning (Hamel + Dan) ▷ #asia-tz (1 messages):

- **erniesg discusses Hainan departure and VPN setup**: *"im actually leaving hainan on 7 june"* and adds a light-hearted comment about ensuring VPN access for coding. This suggests ongoing preparation for remote work or coding sessions.

LLM Finetuning (Hamel + Dan) ▷ #🟩-modal (30 messages🔥):


LLM Finetuning (Hamel + Dan) ▷ #learning-resources (1 messages):


LLM Finetuning (Hamel + Dan) ▷ #jarvis-labs (7 messages):


LLM Finetuning (Hamel + Dan) ▷ #hugging-face (35 messages🔥):


LLM Finetuning (Hamel + Dan) ▷ #replicate (16 messages🔥):


LLM Finetuning (Hamel + Dan) ▷ #langsmith (21 messages🔥):


LLM Finetuning (Hamel + Dan) ▷ #whitaker_napkin_math (2 messages):


LLM Finetuning (Hamel + Dan) ▷ #workshop-4 (174 messages🔥🔥):


LLM Finetuning (Hamel + Dan) ▷ #jason_improving_rag (1 messages):


LLM Finetuning (Hamel + Dan) ▷ #yang_mistral_finetuning (108 messages🔥🔥):


LLM Finetuning (Hamel + Dan) ▷ #gradio (1 messages):


LLM Finetuning (Hamel + Dan) ▷ #axolotl (21 messages🔥):


LLM Finetuning (Hamel + Dan) ▷ #zach-accelerate (11 messages🔥):


LLM Finetuning (Hamel + Dan) ▷ #wing-axolotl (1 messages):


LLM Finetuning (Hamel + Dan) ▷ #charles-modal (29 messages🔥):


LLM Finetuning (Hamel + Dan) ▷ #langchain-langsmith (3 messages):


LLM Finetuning (Hamel + Dan) ▷ #credits-questions (51 messages🔥):


LLM Finetuning (Hamel + Dan) ▷ #strien_handlingdata (158 messages🔥🔥):


LLM Finetuning (Hamel + Dan) ▷ #fireworks (28 messages🔥):


LLM Finetuning (Hamel + Dan) ▷ #braintrust (26 messages🔥):


LLM Finetuning (Hamel + Dan) ▷ #europe-tz (2 messages):


LLM Finetuning (Hamel + Dan) ▷ #predibase (8 messages🔥):


LLM Finetuning (Hamel + Dan) ▷ #career-questions-and-stories (5 messages):

- **Seasoned Developer Dives into LLMs**: A software developer with 17 years of experience shared their journey into learning about LLMs. They expressed excitement and sought advice on running their own models locally, mentioning potential fintech applications.

- **Fastbook Recommended for LLM Fundamentals**: A member recommended the [fast.ai free book on GitHub](https://github.com/fastai/fastbook) for an overview of deep learning fundamentals, especially for software engineers. They highlighted that the book has plenty of code and intuition with minimal math.

- **Community Learning for Engineers**: A user emphasized the importance of community for learning complex topics like LLMs. They shared their experience with a Romanian learning community [Baza7](https://new.baza7.ro/) that offers practical knowledge across various business functions.

LLM Finetuning (Hamel + Dan) ▷ #openpipe (3 messages):


LLM Finetuning (Hamel + Dan) ▷ #openai (51 messages🔥):

- **Optimizing LLMs guide shared**: OpenAI's startup solutions team shared a new [guide on optimizing LLMs for accuracy](https://platform.openai.com/docs/guides/optimizing-llm-accuracy), focusing on prompt engineering, RAG, fine-tuning, and determining what is sufficient for production. A YouTube [DevDay talk](https://www.youtube.com/watch?v=ahnGLM-RC1Y) was also recommended for additional insights.
- **Challenges and requests in fine-tuning**: Users highlighted the helpfulness of the [fine-tuning guide](https://platform.openai.com/docs/guides/fine-tuning/when-to-use-fine-tuning) and the need for improvements in the fine-tuning process. Multiple users requested features such as a retry button, an option not to shuffle the dataset, and solutions for better tool/function calling outputs via the fine-tuning API.
- **Credits and rate limits concerns**: Users discussed issues regarding the application and expiration of OpenAI credits. Some reported having to activate billing to receive credits and others highlighted the difficulty of utilizing $500 in credits within the given 3 months due to rate limits.
- **Rate limits and API spend confusion**: Users questioned whether credits count towards API spend necessary to increase rate limits and shared insights about potentially needing to make a small payment to unlock higher rate limits sooner. Discussions continued around the possibility of OpenAI addressing this concern for a more equitable solution.
- **Availability and functionality of GPT-4 models**: Users mentioned challenges accessing GPT-4 and GPT-4o models despite having credits and speculated that access might be unlocked only after the first paid invoice. Experiences shared indicated credits apply to the balance in the billing overview page.

Unsloth AI (Daniel Han) ▷ #general (417 messages🔥🔥🔥):


Unsloth AI (Daniel Han) ▷ #announcements (1 messages):


Unsloth AI (Daniel Han) ▷ #random (6 messages):


Unsloth AI (Daniel Han) ▷ #help (220 messages🔥🔥):

- **Multi-GPU Support is Here, Multi-Node Coming Soon**: A user asked about multi-node training support for Unsloth, and was informed it's on the roadmap but not available yet. They expressed excitement about the potential for 70B finetuning with multi-GPU setups.
- **VLLM Server Setup Simplified**: A helpful discussion on setting up a VLLM server included commands and links to [installation documentation](https://docs.vllm.ai/en/stable/getting_started/installation.html). The VLLM server can act as a drop-in replacement for the OpenAI API endpoint, useful for hosting fine-tuned LLMs locally.
- **Continued Pre-Training with High Loss Issue**: A user reported high initial loss when continuing pre-training a model, despite previous successful training with low loss. They shared detailed code snippets for loading and training models with specific configurations.
- **Fine-Tuning with LoRA Adapters**: Users discussed issues with loading and continuing fine-tuning using LoRA adapters. A working solution involves creating a new PEFT model and attaching existing adapters afterward, though the wiki method still seems problematic.
- **Handling GPU Memory for Multiple Models**: A user inquired about removing models from GPU memory efficiently to run training loops overnight. Another suggested using `del` to delete model and tokenizer objects to free up GPU memory without restarting the kernel.
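The memory-freeing tip above relies on CPython reclaiming an object once its last reference is dropped. A minimal sketch, using a stand-in object since no GPU or model weights are assumed here:

```python
import gc
import weakref

class FakeModel:
    """Stand-in for a loaded model/tokenizer object (hypothetical; a real
    loop would use the objects returned by your loading code)."""
    pass

model = FakeModel()
probe = weakref.ref(model)  # lets us observe when the object is reclaimed

# The pattern suggested in the channel: drop all references, then force a
# garbage-collection pass so memory is returned without restarting the kernel.
del model
gc.collect()

# On a CUDA machine you would additionally release the allocator's cache:
#   import torch
#   torch.cuda.empty_cache()

print(probe() is None)  # True: the object has been reclaimed
```

Note that `del` alone is not enough if other references to the model survive (e.g. in a trainer object or a notebook `Out` cache); every reference must go before the memory is actually freed.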

Perplexity AI ▷ #general (317 messages🔥🔥):


Perplexity AI ▷ #sharing (13 messages🔥):


CUDA MODE ▷ #general (6 messages):

Links mentioned:


CUDA MODE ▷ #triton (4 messages):

Link mentioned: How to get the generated CUDA code? · Issue #3726 · triton-lang/triton: no description found


CUDA MODE ▷ #pmpp-book (1 messages):

piotr.mazurek: Chapter 4, exercise 9, does anyone know if this is the correct solution here?


CUDA MODE ▷ #youtube-recordings (1 messages):

Link mentioned: Welcome! You are invited to join a meeting: vLLM Open Office Hours (June 5, 2024). After registering, you will receive a confirmation email about joining the meeting.: As a very active contributor to the vLLM project, Neural Magic is excited to partner with the vLLM team at UC Berkeley to host bi-weekly open office hours! Come with questions to learn more about the ...


CUDA MODE ▷ #torchao (16 messages🔥):

Links mentioned:


CUDA MODE ▷ #llmdotc (245 messages🔥🔥):

Links mentioned:


CUDA MODE ▷ #bitnet (1 messages):

Link mentioned: FunctionalTensor: dispatch metadata directly to inner tensor by bdhirsh · Pull Request #127927 · pytorch/pytorch: Fixes #127374 The error in the linked repro is: AssertionError: Please convert all Tensors to FakeTensors first or instantiate FakeTensorMode with 'allow_non_fake_inputs'. Found in aten.sym_st...


CUDA MODE ▷ #sparsity (4 messages):

Links mentioned:


HuggingFace ▷ #announcements (1 messages):

Links mentioned:


HuggingFace ▷ #general (214 messages🔥🔥):

Links mentioned:


HuggingFace ▷ #today-im-learning (2 messages):


HuggingFace ▷ #cool-finds (5 messages):

Links mentioned:


HuggingFace ▷ #i-made-this (5 messages):

Links mentioned:


HuggingFace ▷ #reading-group (2 messages):

Links mentioned:


HuggingFace ▷ #computer-vision (7 messages):

Link mentioned: Paper page - CamViG: Camera Aware Image-to-Video Generation with Multimodal Transformers: no description found


HuggingFace ▷ #NLP (2 messages):

Link mentioned: grounded-ai (GroundedAI): no description found


HuggingFace ▷ #diffusion-discussions (25 messages🔥):

Link mentioned: Wow Amazed GIF - Wow Amazed In Awe - Discover & Share GIFs: Click to view the GIF


HuggingFace ▷ #gradio-announcements (1 messages):

Links mentioned:


LM Studio ▷ #💬-general (47 messages🔥):

Links mentioned:


LM Studio ▷ #🤖-models-discussion-chat (102 messages🔥🔥):

Links mentioned:


LM Studio ▷ #🧠-feedback (1 messages):


LM Studio ▷ #🎛-hardware-discussion (90 messages🔥🔥):

Links mentioned:


LM Studio ▷ #🧪-beta-releases-chat (3 messages):


LM Studio ▷ #avx-beta (2 messages):


OpenAI ▷ #ai-discussions (185 messages🔥🔥):


OpenAI ▷ #gpt-4-discussions (5 messages):


OpenAI ▷ #prompt-engineering (13 messages🔥):


OpenAI ▷ #api-discussions (13 messages🔥):


Stability.ai (Stable Diffusion) ▷ #announcements (1 messages):

Link mentioned: Stable Audio Open — Stability AI: Stable Audio Open is an open source model optimised for generating short audio samples, sound effects and production elements using text prompts.


Stability.ai (Stable Diffusion) ▷ #general-chat (141 messages🔥🔥):

Links mentioned:


Eleuther ▷ #general (55 messages🔥🔥):

Link mentioned: Introduction - SITUATIONAL AWARENESS: The Decade Ahead: Leopold Aschenbrenner, June 2024 You can see the future first in San Francisco. Over the past year, the talk of the town has shifted from $10 billion compute clusters to $100 billion clusters to trill...


Eleuther ▷ #research (58 messages🔥🔥):

Links mentioned:


Eleuther ▷ #lm-thunderdome (3 messages):

Link mentioned: lm-evaluation-harness/examples/lm-eval-overview.ipynb at main · EleutherAI/lm-evaluation-harness: A framework for few-shot evaluation of language models. - EleutherAI/lm-evaluation-harness


Nous Research AI ▷ #off-topic (5 messages):


Nous Research AI ▷ #interesting-links (10 messages🔥):

Links mentioned:


Nous Research AI ▷ #general (84 messages🔥🔥):

Links mentioned:


Nous Research AI ▷ #project-obsidian (3 messages):

Links mentioned:


LlamaIndex ▷ #blog (3 messages):


LlamaIndex ▷ #general (86 messages🔥🔥):

Links mentioned:


LlamaIndex ▷ #ai-discussion (2 messages):


LAION ▷ #general (67 messages🔥🔥):

Links mentioned:


LAION ▷ #research (17 messages🔥):

Links mentioned:


LAION ▷ #learning-ml (1 messages):

Link mentioned: WebSocket Closes Unexpectedly in TTS Service with Multiprocessing and Asyncio: I am developing a TTS (Text-to-Speech) service using multiprocessing and asyncio in Python. My main application integrates other components using queue. However, I'm encountering an issue whe...


Modular (Mojo 🔥) ▷ #general (10 messages🔥):

Links mentioned:


Modular (Mojo 🔥) ▷ #ai (1 messages):


Modular (Mojo 🔥) ▷ #🔥mojo (21 messages🔥):

Link mentioned: Issues · modularml/mojo: The Mojo Programming Language. Contribute to modularml/mojo development by creating an account on GitHub.


Modular (Mojo 🔥) ▷ #nightly (18 messages🔥):

Link mentioned: mojo/docs/changelog-released.md at nightly · modularml/mojo: The Mojo Programming Language. Contribute to modularml/mojo development by creating an account on GitHub.


Interconnects (Nathan Lambert) ▷ #news (1 messages):

Link mentioned: Why Investors Can't Get Enough of AI Robotics Deals Right Now: VCs are betting that robotics is one space where startups can still have an edge against OpenAI.


Interconnects (Nathan Lambert) ▷ #ml-drama (40 messages🔥):

Links mentioned:


Interconnects (Nathan Lambert) ▷ #random (6 messages):

Link mentioned: rewardbench.py results are different for different batch size for beaver-7b · Issue #137 · allenai/reward-bench: Thank you for the great work on rewardbench, as it's been super helpful in evaluating/researching reward models. I've been wrapping your rewardbench.py code to run the reward models published ...


Interconnects (Nathan Lambert) ▷ #memes (1 messages):

420gunna: 👍


Interconnects (Nathan Lambert) ▷ #posts (1 messages):

SnailBot News: <@&1216534966205284433>


Cohere ▷ #general (40 messages🔥):

Links mentioned:


OpenAccess AI Collective (axolotl) ▷ #general (9 messages🔥):

Link mentioned: HuggingFaceFW (HuggingFaceFW): no description found


OpenAccess AI Collective (axolotl) ▷ #datasets (20 messages🔥):


OpenAccess AI Collective (axolotl) ▷ #replicate-help (1 messages):


Latent Space ▷ #ai-general-chat (28 messages🔥):

Links mentioned:


Latent Space ▷ #ai-announcements (1 messages):

Link mentioned: LLM Paper Club (Anthropic's Scaling Monosemanticity) · Zoom · Luma: Vibhu will cover https://www.anthropic.com/news/mapping-mind-language-model / and…


OpenInterpreter ▷ #general (14 messages🔥):


OpenInterpreter ▷ #O1 (4 messages):


OpenInterpreter ▷ #ai-content (2 messages):

Link mentioned: GitHub - 0xrushi/Terminal-Voice-Assistant: Contribute to 0xrushi/Terminal-Voice-Assistant development by creating an account on GitHub.


tinygrad (George Hotz) ▷ #general (14 messages🔥):


tinygrad (George Hotz) ▷ #learn-tinygrad (4 messages):


LangChain AI ▷ #general (12 messages🔥):

Links mentioned:


LangChain AI ▷ #share-your-work (1 messages):

Link mentioned: Drag and Drop Agent Patterns and LLM Chains with Visual Agents: In this demo, I drag and drop an agent flow pattern onto my canvas and run it. You can easily build custom agent flows and save them as patterns to reuse lik...


OpenRouter (Alex Atallah) ▷ #general (13 messages🔥):


MLOps @Chipro ▷ #events (5 messages):

Links mentioned:


Mozilla AI ▷ #llamafile (3 messages):

Links mentioned:


DiscoResearch ▷ #discolm_german (3 messages):

Link mentioned: BUAADreamer/PaliGemma-3B-Chat-v0.2 · Hugging Face: no description found


LLM Perf Enthusiasts AI ▷ #resources (1 messages):

Link mentioned: GenAI Handbook: no description found





{% else %}

The full channel by channel breakdowns have been truncated for email.

If you want the full breakdown, please visit the web version of this email: [{{ email.subject }}]({{ email_url }})!

If you enjoyed AInews, please share with a friend! Thanks in advance!

{% endif %}