Frozen AI News archive

Shall I compare thee to a Sonnet's day?

**Claude 3.5 Sonnet** from **Anthropic** achieves top rankings in coding and hard prompt arenas, surpassing **GPT-4o** and competing with **Gemini 1.5 Pro** at lower cost. **Glif** demonstrates a fully automated **Wojak meme generator** using Claude 3.5 for JSON generation and ComfyUI for images, showcasing new JSON extractor capabilities. **Artifacts** enables rapid creation of niche apps, exemplified by a dual monitor visualizer made in under 5 minutes. **François Chollet** highlights that fusion energy is not a near-term solution compared to existing nuclear fission plants. **Mustafa Suleyman** notes that 75% of desk workers now use AI, marking a shift toward AI-assisted productivity.

Canonical issue URL

AI News for 6/24/2024-6/25/2024. We checked 7 subreddits, 384 Twitters and 30 Discords (415 channels, and 2614 messages) for you. Estimated reading time saved (at 200wpm): 260 minutes. You can now tag @smol_ai for AINews discussions!


In realms of code, Claude Sonnet ascends,

A digital bard in silicon attire.

Through Hard Prompts' maze, its prowess transcends,

Yet skeptics question its confident fire.

LMSYS crowns it silver, not far from gold,

Its robust mind tackles tasks with grace.

But whispers of doubt, like shadows, unfold:

Can Anthropic's child truly keep this pace?

In Glif's domain, it births Wojak dreams,

A meme-smith working at lightning speed.

Five minutes craft what impossible seems,

JSON's extraction, a powerful deed.


{% if medium == 'web' %}

Table of Contents

[TOC]

{% else %}

The Table of Contents and Channel Summaries have been moved to the web version of this email: [{{ email.subject }}]({{ email_url }})!

{% endif %}


AI Twitter Recap

All recaps done by Claude 3 Opus, best of 4 runs. We are working on clustering and flow engineering with Haiku.

Claude 3.5 Sonnet from Anthropic

Glif and Wojak Meme Generator

Artifacts and Niche App Creation

Fusion Energy and Nuclear Fission

AI Adoption and Productivity

Together Mixture-of-Agents (MoA)

Retrieval Augmented Generation (RAG) Fine-Tuning

Extending LLM Context Windows

Many-Shot In-Context Learning

Miscellaneous


AI Reddit Recap

Across r/LocalLlama, r/machinelearning, r/openai, r/stablediffusion, r/ArtificialInteligence, r/LLMDevs, r/Singularity. Comment crawling works now but still has a lot of room for improvement!

AI Developments and Advancements

AI Models, Frameworks, and Benchmarks

AI Ethics, Regulation, and Societal Impact

AI Applications and Use Cases

AI Research and Development

Miscellaneous


AI Discord Recap

A summary of Summaries of Summaries

Claude 3 Sonnet

1. LLM Advancements and Benchmarking

2. Optimizing LLM Inference and Training

3. Open-Source AI Frameworks and Community Efforts

4. Multimodal AI and Generative Modeling Innovations

Claude 3.5 Sonnet

  1. New LLMs Shake Up the Leaderboards:

    • The Replete-Coder-Llama3-8B model has gained attention across multiple discords for its proficiency in over 100 programming languages and advanced coding capabilities.

    • DeepSeek-V2 with 236B parameters and Hathor_Fractionate-L3-8B-v.05 were discussed for their performance in various tasks.

    • Skepticism about benchmarks was a common theme, with users emphasizing the need for real-world testing over leaderboard rankings.

  2. Open-Source Tools Empower AI Developers:

    • Axolotl gained traction for supporting diverse dataset formats in LLM training.

    • LlamaIndex was highlighted for its integration with DSPy, enhancing RAG capabilities.

    • The release of llamafile v0.8.7 brought faster quant operations and bug fixes, with hints at potential Android compatibility.

  3. Optimization Techniques Push LLM Boundaries:

    • The Adam-mini optimizer sparked discussions across discords for its ability to reduce memory usage by 45-50% compared to AdamW.

    • The makers of the Sohu AI chip claim it processes 500,000 tokens per second with Llama 70B, though the community expressed skepticism about these performance figures.

  4. AI Ethics and Security Take Center Stage:

Claude 3 Opus

1. New LLM Releases and Benchmarking

2. Optimizing LLM Performance and Efficiency

3. Open-Source AI Frameworks and Collaborations

4. Multimodal AI and Generative Models

GPT4O (gpt-4o-2024-05-13)

  1. Performance Improvements and Technical Fixes:

    • PyTorch Tensor Alignment Issue Gets Attention: Users discussed aligning PyTorch tensors for efficient memory usage, referencing code and documentation such as the torch.ops.aten._weight_int4pack_mm source code.
    • LangChain Enhancements: Members praised LangChain Zep integration, which provides persistent AI memory, summarizing conversations for effective long-term use.
    • LazyBuffer Bug Identified in Tinygrad: A problem with 'LazyBuffer' not having an attribute 'srcs' in Tinygrad was documented, with suggested fixes like calling .contiguous() and using Docker for CI debugging (Dockerfile here).
  2. Ethical and Legal Challenges in AI:

    • AI Music Generators Sued for Copyright Infringement: Major record companies are suing Suno and Udio for unauthorized training on copyrighted music, raising questions on ethical AI training practices Music Business Worldwide report.
    • Carlini Defends His Attack Research: Nicholas Carlini defended his research on AI model attacks, stating that they highlight crucial AI model vulnerabilities blog post.
    • Probllama's Security Breach: Rabbitude's security disclosure revealed critical vulnerabilities due to hardcoded API keys, potentially enabling widespread misuse across services like ElevenLabs and Google Maps full disclosure.
  3. New Releases and AI Model Innovations:

    • EvolutionaryScale’s Breakthrough with ESM3: The ESM3 model simulates 500M years of evolution, earning $142M in funding and aiming to achieve new heights in programming biology funding announcement.
    • Gradio's New Feature Set: The latest release, Gradio v4.37, introduced a revamped chatbot UI, dynamic plots, and GIF support, alongside performance improvements for a better user experience changelog.
    • Rising AI Models on OpenRouter: New AI models like AI21's Jamba Instruct and NVIDIA's Nemotron-4 340B were added to the platform, integrating diverse capabilities for various applications.
  4. Dataset Management and Optimization:

    • Addressing RAM Issues in Dataset Loading: Techniques like using save_to_disk, load_from_disk, and enabling streaming=True were discussed to mitigate memory issues when handling large datasets in AI models.
    • Minhash Optimization Performance Boost: A member reported a 12x performance improvement for minhash calculations using Python, sparking interest and collaboration on further optimization GitHub link.
  5. Conferences, Events, and Community Engagement:

    • AI Engineer World's Fair Highlights: Excitement builds as engineers anticipate the AI Engineer World's Fair with keynotes and engaging talks, including insights from the LlamaIndex team event details.
    • Detecting Bots and Fraud in LLMs: An event on June 27 will feature Unmesh Kurup from hCaptcha discussing strategies to counteract LLM-based bots and fraud detection in modern AI security event registration.
    • OpenAI's ChatGPT Desktop App for macOS: The new app allows macOS users to access ChatGPT with enhanced features, marking a significant step in AI usability and integration ChatGPT for macOS.
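
The minhash speedup in item 4 above can be illustrated with a minimal, stdlib-only MinHash sketch. This is a generic illustration of the technique, not the member's optimized code, and all names below are ours: k independent hash minima approximate Jaccard similarity because two sets share a given minimum with probability equal to their Jaccard index.

```python
import hashlib

def minhash(tokens, k=64):
    """Signature of k hashed minima; each seed acts as an independent hash function."""
    sig = []
    for seed in range(k):
        sig.append(min(
            int.from_bytes(hashlib.blake2b(f"{seed}:{t}".encode(), digest_size=8).digest(), "big")
            for t in tokens
        ))
    return sig

def est_jaccard(a, b):
    """Fraction of matching minima estimates |a ∩ b| / |a ∪ b|."""
    sa, sb = minhash(a), minhash(b)
    return sum(x == y for x, y in zip(sa, sb)) / len(sa)

doc1 = set("the quick brown fox jumps over the lazy dog".split())
doc2 = set("the quick brown fox leaps over a lazy dog".split())
est = est_jaccard(doc1, doc2)
print(round(est, 2))  # close to the true Jaccard similarity of 0.7
```

The practical speedups people chase (like the 12x above) usually come from vectorizing or batching exactly this inner minimum-per-seed loop.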

PART 1: High level Discord summaries

HuggingFace Discord


CUDA MODE Discord


Unsloth AI (Daniel Han) Discord


Perplexity AI Discord


LM Studio Discord

RTX 3090 Can't Handle the Heat: Users express frustration with an RTX 3090 eGPU setup failing to load larger models like Command R (34b) Q4_K_S, leading to suggestions for exl2 format utilization for improved VRAM use, despite a noted scarcity of tools and GUI options for exl2.

Confusion Cleared on Different Llama Flavors: Clarification was provided for Llama 3 model variants: the unlabeled Llama 3 8B is the base model, set apart from the Llama 3 8B text and Llama 8B Instruct, which are finetuned for specific tasks.

Model Marvels and Mishaps: Praise was given for Hathor_Fractionate-L3-8B-v.05's creativity and Replete-Coder-Llama3-8B's coding proficiency, while DeepSeek Coder V2 was flagged for high VRAM demands, and New Dawn 70b was applauded for its role-play capabilities with contexts up to 32k.

Tech Support Troubles: Issues surfaced with Ubuntu 22.04 network errors in LM Studio, with possible remedies like disabling IPv6, and it was noted that LM Studio does not currently support Lora adapters or image generation.

Hardware Banter and Bottlenecks: A humorous exchange highlighted the gulf between the cost of high-performance GPUs and their necessity for advanced AI work, with older rigs mockingly deemed as belonging to "the 1800s".


LAION Discord


OpenAI Discord

ChatGPT App Lands on macOS: The ChatGPT desktop app is now available for macOS, offering streamlined access via Option + Space shortcut and enhanced features for chatting about emails, screenshots, and on-screen content. Check it out at ChatGPT for macOS.

Animated Discussion Over Token Size: Engineers debated token context window sizes across models; ChatGPT4 offers 32,000 tokens for Plus users and 8,000 tokens for free users, while models like Gemini and Claude provide larger capacities, with Claude reaching 200k tokens.
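
A quick way to sanity-check a prompt against those context windows is a rough character-based estimate. This is only a heuristic sketch (the ~4 characters-per-token ratio is an assumption for English text, and the model keys are our labels); use the model's real tokenizer for exact counts.

```python
# Rough heuristic: ~4 characters per token for English text. This is an
# approximation only; use the model's actual tokenizer for exact counts.
def approx_tokens(text: str) -> int:
    return max(1, len(text) // 4)

# Context window figures quoted in the discussion above.
CONTEXT_WINDOWS = {
    "chatgpt4-plus": 32_000,
    "chatgpt4-free": 8_000,
    "claude": 200_000,
}

prompt = "word " * 10_000  # ~50,000 characters
needed = approx_tokens(prompt)
fits = {model: needed <= window for model, window in CONTEXT_WINDOWS.items()}
print(needed, fits)
```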

Custom GPT Misconceptions Cleared: Members clarified the differences between CustomGPT's document attachment feature and actual model training. CustomGPT doesn't offer persistent memory across chats but rather augments the model's knowledge with external documents.

GPT Struggles Reported: Discord users reported issues with GPT's handling of large documents, incorrect information returned from uploaded files, performance hiccups, and JSON output difficulties, highlighting the need for better handling of complex queries and outputs.

AI Chips and Evolutionary Breakthroughs: Shared excitement emerged around EvolutionaryScale's ESM3, which simulates 500M years of biological evolution, and Sohu's AI chip, reportedly capable of outperforming current GPUs at running transformer models.


Stability.ai (Stable Diffusion) Discord


Nous Research AI Discord


OpenRouter (Alex Atallah) Discord


Latent Space Discord


LlamaIndex Discord


Modular (Mojo 🔥) Discord

Git Logs for Efficient Changelog Peeking: Engineers discovered that "git log -S" searches commit history for changes to a specific string, which is valuable when navigating the Mojo changelog, especially since documentation rebuilds eliminate searchable history older than three months.
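
The "git log -S" trick can be sketched end to end. The throwaway repo, file name, and search string below are illustrative, not the Mojo repository itself:

```python
import os
import subprocess
import tempfile

def git(*args, cwd):
    """Run a git command in the given repo and return its stdout."""
    return subprocess.run(("git",) + args, cwd=cwd, capture_output=True,
                          text=True, check=True).stdout

repo = tempfile.mkdtemp()
git("init", "-q", cwd=repo)
git("config", "user.email", "demo@example.com", cwd=repo)
git("config", "user.name", "demo", cwd=repo)

# Two commits: the first introduces the string "alpha", the second removes it.
for content, message in [("feature: alpha", "add alpha"), ("feature: beta", "rename to beta")]:
    with open(os.path.join(repo, "changelog.md"), "w") as f:
        f.write(content + "\n")
    git("add", ".", cwd=repo)
    git("commit", "-qm", message, cwd=repo)

# `git log -S` (the "pickaxe") lists commits where the occurrence count of the
# string changed -- here, both commits match.
hits = git("log", "-S", "alpha", "--oneline", "--", "changelog.md", cwd=repo)
print(hits)
```

Because the pickaxe walks commit contents rather than rendered docs, it recovers history that the rebuilt documentation site no longer exposes.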

Mojo and MAX Interconnected Potential: Discussions indicated that while Mojo does not yet interoperate easily with Torch, a planned integration aims to harness both Python and C++ capabilities. Additionally, for AI model serving, the MAX graph API serde is in development, promising future support for custom AI models with frameworks like Triton.

MAX 24.4 Embraces MacOS and Local AI: With the release of MAX 24.4, MacOS users can now leverage the toolchain for building and deploying Generative AI pipelines, introducing support for local models like Llama3 and native quantization.

SIMD & Vectorization Hot Topics for Mojo: Engineers are examining SIMD and vectorization within Mojo, where hand-rolled SIMD, LLVM's loop vectorizer status, and features like SVE support surface as critical considerations. These discussions spurred recommendations to submit features or PRs for better alignment to SIMD standards.

Nightly Compiler Updates Drive Mojo Optimizations: Issues and enhancements are flowing with Mojo nightly versions 2024.6.2505 and 2024.6.2516, where performance gains via list autodereferencing and better reference handling in dictionaries are emphasized. Troubleshooting highlights include handling compile-time boolean expressions, with reference to specific commits.


Eleuther Discord


Interconnects (Nathan Lambert) Discord


OpenInterpreter Discord

Llama3-8B Coder AI Shakes Up the Community: The Replete-Coder-Llama3-8B model has impressed engineers with its proficiency in over 100 programming languages and advanced coding capabilities, though it's not tailored for vision tasks.

Technical Triumphs Tangled With Quirks: Engineers found success using claude-3-5-sonnet-20240620 for code executions after troubleshooting flags, but compatibility and function support issues point to the need for refined model configurations.

Vision Feature Frustration Persists: Despite concerted efforts, users like daniel_farinax struggle with sluggish processing times and CUDA memory errors when employing vision capabilities locally, spotlighting the cost and complexity of emulating OpenAI's vision functions.

Limited Local Vision Functionality Sparks Debate: Users attempt to activate vision features such as --local --vision with minimal success, revealing a gap in Llama3's capabilities and the desire for more accessible and efficient local vision task execution.

Single AI Content Sidenote: A lone remark about the unsettling nature of AI-generated videos suggests an underlying concern for user m.0861, though not expanded into a broader discussion within the engineering community.


LangChain AI Discord


Cohere Discord


tinygrad (George Hotz) Discord


OpenAccess AI Collective (axolotl) Discord

These were the highlights within the OpenAccess AI Collective that captured the guild's most significant discussions and technical interests.


Torchtune Discord


LLM Finetuning (Hamel + Dan) Discord


Mozilla AI Discord


AI Stack Devs (Yoko Li) Discord


MLOps @Chipro Discord


The LLM Perf Enthusiasts AI Discord has no new messages. If this guild has been quiet for too long, let us know and we will remove it.


The Datasette - LLM (@SimonW) Discord has no new messages. If this guild has been quiet for too long, let us know and we will remove it.


The DiscoResearch Discord has no new messages. If this guild has been quiet for too long, let us know and we will remove it.


The AI21 Labs (Jamba) Discord has no new messages. If this guild has been quiet for too long, let us know and we will remove it.


The YAIG (a16z Infra) Discord has no new messages. If this guild has been quiet for too long, let us know and we will remove it.


PART 2: Detailed by-Channel summaries and links

{% if medium == 'web' %}

HuggingFace ▷ #announcements (1 messages):

- **Argilla 2.0 boosts dataset annotation**: [Argilla 2.0](https://x.com/argilla_io/status/1805250218184560772) announced with new Python SDK for dataset integration and a flexible UI for data annotation. The update promises to "create high-quality datasets more efficiently."
- **Microsoft's Florence models crush benchmarks**: Microsoft released [Florence](https://x.com/osanseviero/status/1803324863492350208), a vision model for tasks like captioning and OCR with models sized 200M and 800M, MIT-licensed. "*Fine-tune Florence-2 on any task*" with a new [notebook and walkthrough](https://x.com/mervenoyann/status/1805265940134654424) on DocVQA dataset.
- **Generate GGUF quants in seconds**: New [support added](https://x.com/reach_vb/status/1804615756568748537) for "Generate GGUF quants in less than 120 seconds" including automatic uploads to the hub and support for private and org repos. Over 3500 model checkpoints created.
- **Embedding models guide for AWS**: A comprehensive guide on how to [train and deploy embedding models](https://www.philschmid.de/sagemaker-train-deploy-embedding-models) on AWS SageMaker using Sentence Transformers and fine-tuning the BGE model for financial data. Training takes ~10 minutes on a ml.g5.xlarge instance at around $0.2.
- **Ethics and Society newsletter on data quality**: The latest [Ethics and Society newsletter](https://huggingface.co/blog/ethics-soc-6) highlights the importance of data quality. Collaboration with the ethics regulars led to a detailed discussion on this crucial theme.

Links mentioned:


HuggingFace ▷ #general (436 messages🔥🔥🔥):

Links mentioned:


HuggingFace ▷ #today-im-learning (2 messages):

- **Challenges with Langchain Pydantic and LLM**: A member is trying to use **Langchain Pydantic Basemodel** to structure document data into JSON with additional insights. The LLM misinterprets tabular structures in the data, and they are seeking evaluation strategies or better approaches.

- **Expression of Interest in the Topic**: Another member indicated their interest in the topic by stating, "I am interested ...".
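
The schema-validation idea behind that workflow can be sketched with the stdlib alone. This is a stand-in for Pydantic's BaseModel, not the member's setup; `Row` and `parse_rows` are our illustrative names:

```python
import json
from dataclasses import dataclass, asdict

# Coerce and validate each record, failing loudly on missing keys or wrong
# types -- the same contract a Pydantic BaseModel enforces.
@dataclass
class Row:
    name: str
    value: float

def parse_rows(raw: str) -> list[Row]:
    return [Row(name=str(item["name"]), value=float(item["value"]))
            for item in json.loads(raw)]

rows = parse_rows('[{"name": "revenue", "value": "12.5"}]')
print(asdict(rows[0]))
```

For evaluation, the useful property is that malformed LLM output raises immediately at `parse_rows`, so the failure rate over a test set is directly measurable.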

HuggingFace ▷ #cool-finds (3 messages):

Links mentioned:


HuggingFace ▷ #i-made-this (159 messages🔥🔥):

Links mentioned:


HuggingFace ▷ #reading-group (4 messages):


HuggingFace ▷ #computer-vision (3 messages):

Link mentioned: Add COCO evaluation metrics · Issue #111 · huggingface/evaluate: I'm currently working on adding Facebook AI's DETR model (end-to-end object detection with Transformers) to HuggingFace Transformers. The model is working fine, but regarding evaluation, I…


HuggingFace ▷ #NLP (3 messages):

Link mentioned: [RoBERTa-based] Add support for sdpa by hackyon · Pull Request #30510 · huggingface/transformers: What does this PR do? Adding support for SDPA (scaled dot product attention) for RoBERTa-based models. More context in #28005 and #28802. Before submitting This PR fixes a typo or improves the do...


HuggingFace ▷ #gradio-announcements (1 messages):


CUDA MODE ▷ #torch (3 messages):


CUDA MODE ▷ #torchao (11 messages🔥):

Links mentioned:


CUDA MODE ▷ #off-topic (3 messages):

Links mentioned:


CUDA MODE ▷ #hqq (4 messages):


CUDA MODE ▷ #llmdotc (402 messages🔥🔥):

Links mentioned:


CUDA MODE ▷ #rocm (1 messages):

iron_bound: https://chipsandcheese.com/2024/06/25/testing-amds-giant-mi300x/


CUDA MODE ▷ #bitnet (1 messages):

Link mentioned: pytorch/aten/src/ATen/native/native_functions.yaml at 18fdc0ae5b9e9e63eafe0b10ab3fc95c1560ae5c · pytorch/pytorch: Tensors and Dynamic neural networks in Python with strong GPU acceleration - pytorch/pytorch


Unsloth AI (Daniel Han) ▷ #general (131 messages🔥🔥):

Links mentioned:


Unsloth AI (Daniel Han) ▷ #help (251 messages🔥🔥):

- **Checkpoints and Finetuning**: "Use save checkpoints and continue finetuning from the checkpoints" is suggested, with a link to the [Unsloth wiki](https://github.com/unslothai/unsloth/wiki) for more detailed instructions.
- **Multi GPU Issues**: Users reported runtime errors when trying to run Unsloth on multi-GPU setups and discussed potential workarounds, including limiting CUDA devices and downgrading to a previous Unsloth version. A relevant link to [GitHub issue 660](https://github.com/unslothai/unsloth/issues/660) was shared.
- **Vision Models and OCR**: GPT4o's performance in OCR was discussed, with some users skeptical about LLAVA models achieving similar results. An alternative suggestion was [openedai-vision](https://github.com/matatonic/openedai-vision).
- **Experimentation with LLaMA Models**: Users shared difficulties and potential solutions when fine-tuning "unsloth/Phi-3-mini-4k-instruct" and other issues with datasets and training setups. A workaround involving model merging for better results on Hugging Face was suggested.
- **Training Statistics and Callbacks**: Discussion on how to track loss and other metrics during training, with recommendations for using wandb, TensorBoard, and custom callbacks in Hugging Face. A link to [TensorBoardCallback documentation](https://huggingface.co/docs/transformers/main_classes/callback#transformers.integrations.TensorBoardCallback) was provided.
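
The callback-based metric tracking recommended above can be sketched framework-agnostically. `LossLogger` and the toy training loop below are illustrative stand-ins, not the actual transformers TrainerCallback API (see the linked documentation for that):

```python
# Minimal callback pattern for tracking training loss (illustrative names,
# not the real Hugging Face API).
class LossLogger:
    def __init__(self):
        self.history = []

    def on_log(self, step, logs):
        # Record (step, loss) whenever the trainer emits a log event.
        if "loss" in logs:
            self.history.append((step, logs["loss"]))

def train(steps, callbacks):
    loss = 1.0
    for step in range(1, steps + 1):
        loss *= 0.9  # pretend the model improves each step
        for cb in callbacks:
            cb.on_log(step, {"loss": loss})

logger = LossLogger()
train(5, [logger])
print(logger.history[-1])
```

The same hook shape is how wandb and TensorBoard integrations attach: they subscribe to log events rather than modifying the training loop.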

Links mentioned:


Perplexity AI ▷ #general (182 messages🔥🔥):

<ul>
    <li><strong>Language Switching Bug Annoys Users</strong>: Multiple users reported a bug where the UI language on Perplexity randomly changed to languages other than English, despite settings indicating English. One user noted, "It says English, but it's in Spanish."</li>
    <li><strong>Pro Search Features Confuse Users</strong>: Users asked about differences between Pro Search and standard search, expressing confusion over features supposedly available to standard users as well. Another user wished for clarity, noting that new multi-step processes felt slower.</li>
    <li><strong>File Download Issues Plague PRO Users</strong>: A user reported problems with generating accessible download links for uploaded files, despite having a PRO subscription. The response indicated the lack of a "code interpreter" in Perplexity.</li>
    <li><strong>Perplexity Pro Functionality in Question</strong>: Users from Brazil faced issues with Perplexity fetching searches from localized sources, instead returning results primarily in English. One user from Argentina questioned if subscribing to the Pro plan would unlock the “Pages” feature.</li>
    <li><strong>API Summarization Fails to Impress</strong>: A user working with the Perplexity API noted that it failed to return citations and images. Another advised to ask Perplexity to create code blocks as a workaround for document generation.</li>
</ul>

Link mentioned: Jina Reranker v2 for Agentic RAG: Ultra-Fast, Multilingual, Function-Calling & Code Search: Jina Reranker v2 is the best-in-class reranker built for Agentic RAG. It features function-calling support, multilingual retrieval for over 100 languages, code search capabilities, and offers a 6x spe...


Perplexity AI ▷ #sharing (8 messages🔥):

Links mentioned:


Perplexity AI ▷ #pplx-api (2 messages):


LM Studio ▷ #💬-general (75 messages🔥🔥):

Links mentioned:


LM Studio ▷ #🤖-models-discussion-chat (26 messages🔥):

Links mentioned:


LM Studio ▷ #🧠-feedback (3 messages):


LM Studio ▷ #⚙-configs-discussion (7 messages):


LM Studio ▷ #🎛-hardware-discussion (1 messages):

uniartisan_86246: I would like to ask if I can set the CPU threads when I am a server


LM Studio ▷ #🧪-beta-releases-chat (5 messages):


LM Studio ▷ #autogen (1 messages):


LM Studio ▷ #open-interpreter (11 messages🔥):


LAION ▷ #general (81 messages🔥🔥):

Links mentioned:


LAION ▷ #research (27 messages🔥):

Links mentioned:


OpenAI ▷ #annnouncements (1 messages):


OpenAI ▷ #ai-discussions (79 messages🔥🔥):

Links mentioned:


OpenAI ▷ #gpt-4-discussions (17 messages🔥):


OpenAI ▷ #prompt-engineering (4 messages):


OpenAI ▷ #api-discussions (4 messages):


Stability.ai (Stable Diffusion) ▷ #general-chat (90 messages🔥🔥):

Link mentioned: Civitai Joins the Open Model Initiative | Civitai: Today, we’re excited to announce the launch of the Open Model Initiative, a new community-driven effort to promote the development and adoption of ...


Nous Research AI ▷ #off-topic (4 messages):

Links mentioned:


Nous Research AI ▷ #interesting-links (5 messages):

Links mentioned:


Nous Research AI ▷ #general (64 messages🔥🔥):

Links mentioned:


Nous Research AI ▷ #rag-dataset (5 messages):


OpenRouter (Alex Atallah) ▷ #announcements (3 messages):

Links mentioned:


OpenRouter (Alex Atallah) ▷ #app-showcase (1 messages):

Link mentioned: A Day in the Life of a Bounty Hunter | Elite: Dangerous AI Integration: 🌟 Project on Github: https://github.com/RatherRude/Elite-Dangerous-AI-Integration 💬 Join our ...


OpenRouter (Alex Atallah) ▷ #general (63 messages🔥🔥):

Links mentioned:


Latent Space ▷ #ai-general-chat (57 messages🔥🔥):

Links mentioned:


Latent Space ▷ #ai-announcements (4 messages):

Links mentioned:


LlamaIndex ▷ #blog (3 messages):

Link mentioned: AI Engineer World's Fair: Join 2,000 software engineers enhanced by and building with AI. June 25 - 27, 2024, San Francisco.


LlamaIndex ▷ #general (52 messages🔥):

Links mentioned:


Modular (Mojo 🔥) ▷ #general (9 messages🔥):

Link mentioned: mojo/docs/changelog.md at 1b79ef249f52163b0bafbd10c1925bfc81ea1cb3 · modularml/mojo: The Mojo Programming Language. Contribute to modularml/mojo development by creating an account on GitHub.


Modular (Mojo 🔥) ▷ #💬︱twitter (1 messages):

ModularBot: From Modular: https://twitter.com/Modular/status/1805642326129492195


Modular (Mojo 🔥) ▷ #✍︱blog (1 messages):

Link mentioned: Modular: What's New in MAX 24.4? MAX on MacOS, Fast Local Llama3, Native Quantization and GGUF Support: We are building a next-generation AI developer platform for the world. Check out our latest post: What's New in MAX 24.4? MAX on MacOS, Fast Local Llama3, Native Quantization and GGUF Support


Modular (Mojo 🔥) ▷ #ai (3 messages):

Link mentioned: Haystack | Haystack: Haystack, the composable open-source AI framework


Modular (Mojo 🔥) ▷ #🔥mojo (2 messages):


Modular (Mojo 🔥) ▷ #performance-and-benchmarks (24 messages🔥):


Modular (Mojo 🔥) ▷ #🏎engine (2 messages):


Modular (Mojo 🔥) ▷ #nightly (9 messages🔥):


Eleuther ▷ #general (21 messages🔥):

Links mentioned:


Eleuther ▷ #research (17 messages🔥):

Links mentioned:


Eleuther ▷ #scaling-laws (3 messages):

Link mentioned: Some AI Koans: no description found


Eleuther ▷ #interpretability-general (4 messages):

Links mentioned:


Eleuther ▷ #lm-thunderdome (5 messages):

Link mentioned: add arc_challenge_mt by jonabur · Pull Request #1900 · EleutherAI/lm-evaluation-harness: This PR adds tasks for machine-translated versions of arc challenge for 11 languages. We will also be adding more languages in the future.


Interconnects (Nathan Lambert) ▷ #news (7 messages):

<ul>
  <li><strong>Multi app joins OpenAI Family</strong>: <a href="https://multi.app/blog/multi-is-joining-openai">Multi's blog post</a> announced that the app will join OpenAI, exploring how to work with computers alongside AI. Active teams can use the app until July 24, 2024, after which all user data will be deleted.</li>
  <li><strong>Apple Dismisses AI Partnership with Meta</strong>: <a href="https://archive.is/uUv1L">Apple rejected Meta's proposal</a> to integrate the Llama AI chatbot into iPhones, opting instead for deals with OpenAI's ChatGPT and Alphabet's Gemini. Concerns over Meta's privacy practices contributed to Apple's decision.</li>
</ul>

Links mentioned:


Interconnects (Nathan Lambert) ▷ #ml-drama (5 messages):

Link mentioned: rabbit data breach: all r1 responses ever given can be downloaded - rabbitude: rabbit inc has known that we have had their elevenlabs (tts) api key for a month, but they have taken no action to rotate the api keys.


Interconnects (Nathan Lambert) ▷ #random (27 messages🔥):

Links mentioned:


Interconnects (Nathan Lambert) ▷ #memes (10 messages🔥):

Link mentioned: Tweet from Jordan Schneider (@jordanschnyc): The US government should be terrified about the current state of AI lab security. From our interview with @alexandr_wang releasing on ChinaTalk tomorrow, after I asked him what the US government shoul...


OpenInterpreter ▷ #general (36 messages🔥):

Link mentioned: Replete-AI/Replete-Coder-Llama3-8B · Hugging Face: no description found


OpenInterpreter ▷ #ai-content (1 messages):

m.0861: man ai videos just give me the creeps yalls


LangChain AI ▷ #general (32 messages🔥):

Links mentioned:


LangChain AI ▷ #share-your-work (3 messages):

Links mentioned:


LangChain AI ▷ #tutorials (1 messages):

Link mentioned: Do you even need an AI Framework or GPT-4o for your app?: So, you want to integrate AI into your product, right? Whoa there, not so fast!With models like GPT-4o, Gemini, Claude, Mistral, and others and frameworks li...


Cohere ▷ #general (30 messages🔥):

Links mentioned:


tinygrad (George Hotz) ▷ #learn-tinygrad (19 messages🔥):

Links mentioned:


OpenAccess AI Collective (axolotl) ▷ #general (9 messages🔥):

Link mentioned: Adam-mini: Use Fewer Learning Rates To Gain More: We propose Adam-mini, an optimizer that achieves on-par or better performance than AdamW with 45% to 50% less memory footprint. Adam-mini reduces memory by cutting down the number of learning rates in...


OpenAccess AI Collective (axolotl) ▷ #general-help (1 messages):


OpenAccess AI Collective (axolotl) ▷ #datasets (3 messages):


Torchtune ▷ #general (8 messages🔥):


LLM Finetuning (Hamel + Dan) ▷ #general (3 messages):

Link mentioned: Language models on the command-line: I gave a talk about accessing Large Language Models from the command-line last week as part of the Mastering LLMs: A Conference For Developers & Data Scientists six week long …


LLM Finetuning (Hamel + Dan) ▷ #learning-resources (2 messages):

Links mentioned:


LLM Finetuning (Hamel + Dan) ▷ #axolotl (1 messages):

raminparker: Very cool. Thx for the article!


LLM Finetuning (Hamel + Dan) ▷ #freddy-gradio (1 messages):


Mozilla AI ▷ #announcements (2 messages):


Mozilla AI ▷ #llamafile (4 messages):


AI Stack Devs (Yoko Li) ▷ #app-showcase (1 messages):

Link mentioned: Honeybot : no description found


AI Stack Devs (Yoko Li) ▷ #ai-companion (1 messages):

Link mentioned: Honeybot : no description found


AI Stack Devs (Yoko Li) ▷ #ai-town-discuss (1 messages):


MLOps @Chipro ▷ #events (1 messages):

Link mentioned: A Million Turing Tests per Second: Detecting bots and fraud in the time of LLMs · Luma: The Data Phoenix team invites you to our upcoming webinar, which will take place on June 27th at 10 a.m. PDT. Topic: A Million Turing Tests per Second:…






{% else %}

The full channel by channel breakdowns have been truncated for email.

If you want the full breakdown, please visit the web version of this email: [{{ email.subject }}]({{ email_url }})!

If you enjoyed AInews, please share with a friend! Thanks in advance!

{% endif %}