Frozen AI News archive

Mini, Nemo, Turbo, Lite - Smol models go brrr (GPT4o version)

**GPT-4o-mini** launches with a **99% price reduction** compared to text-davinci-003, costing roughly **3.5% of the price of GPT-4o** while matching Opus-level benchmarks. It supports **16k output tokens**, is faster than previous models, and will soon support **text, image, video, and audio inputs and outputs**. **Mistral Nemo**, a **12B parameter model** developed with **Nvidia**, features a **128k token context window**, an FP8 checkpoint, and strong benchmark performance. **Together Lite and Turbo** offer fp8/int4 quantizations of **Llama 3** with up to **4x throughput** at significantly reduced cost. **DeepSeek V2** is now open-sourced. Looking ahead: at least **5 unreleased models** are known, and **Llama 4** details have leaked ahead of ICML 2024.
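A quick back-of-envelope check of the pricing claims, assuming the per-1M-input-token list prices at launch ($20 for text-davinci-003, $5 for gpt-4o, $0.15 for gpt-4o-mini); the exact "percent of GPT-4o" figure varies with whether you compare input, output, or blended rates:

```python
# List prices in USD per 1M input tokens (assumed launch figures;
# verify against the official pricing page).
PRICES = {
    "text-davinci-003": 20.00,  # $0.02 / 1K tokens
    "gpt-4o":            5.00,
    "gpt-4o-mini":       0.15,
}

def reduction_vs(model: str, baseline: str) -> float:
    """Percent price reduction of `model` relative to `baseline`."""
    return 100 * (1 - PRICES[model] / PRICES[baseline])

print(f"{reduction_vs('gpt-4o-mini', 'text-davinci-003'):.2f}% cheaper than text-davinci-003")
print(f"mini input tokens cost {100 * PRICES['gpt-4o-mini'] / PRICES['gpt-4o']:.1f}% of gpt-4o's")
```

On input rates alone this gives a 99.25% reduction versus text-davinci-003 and ~3% of GPT-4o's price, consistent with the headline numbers.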

Canonical issue URL

AI News for 7/17/2024-7/18/2024. We checked 7 subreddits, 384 Twitters and 29 Discords (467 channels, and 2324 messages) for you. Estimated reading time saved (at 200wpm): 279 minutes. You can now tag @smol_ai for AINews discussions!

As with public buses, startup ideas, and asteroid-apocalypse movies, many days you spend waiting for something to happen, and then everything happens on the same day. This occurs with puzzling, quasi-astrological regularity around the ides of each month - Feb 15, Apr 15, May 13, and now Jul 17:

As for why releases bunch up like this - either Mercury is in retrograde, or it's because ICML is happening next week, with many of these companies presenting and hiring, and with Llama 3 400b expected to be released on the 23rd.


{% if medium == 'web' %}

Table of Contents

[TOC]

{% else %}

The Table of Contents and Channel Summaries have been moved to the web version of this email: [{{ email.subject }}]({{ email_url }})!

{% endif %}


AI Twitter Recap

all recaps done by Claude 3.5 Sonnet, best of 4 runs.

AI Models and Architectures

Open Source and Closed Source Debate

AI Agents and Frameworks

Prompting Techniques and Data

Memes and Humor


AI Reddit Recap

/r/LocalLlama Recap

Theme 1. EU Regulations Limiting AI Model Availability

Theme 2. Advancements in LLM Quantization Techniques

Theme 3. Comparative Analysis of LLMs for Specific Tasks

Theme 4. Innovative AI Education and Development Platforms

All AI Reddit Recap

/r/machinelearning, /r/openai, /r/stablediffusion, /r/ArtificialInteligence, /r/LLMDevs, /r/Singularity

Theme 1. AI in Comic and Art Creation

Theme 2. Real-Time AI Video Generation with Kling AI

Theme 3. OpenAI's Sora Video Generation Model

Theme 4. AI Regulation and Deployment Challenges


AI Discord Recap

As we do on frontier model release days, there are two versions of today's Discord summaries. You are reading the one where channel summaries are generated by GPT-4o; those channel summaries are then rolled up into {4o/mini/sonnet/opus} summaries of summaries. See the archives for the GPT-4o-mini pairing for your own channel-by-channel summary comparison.
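The rollup described above is a two-stage map-reduce: summarize each channel, then summarize the concatenated channel summaries. A minimal sketch, with a stubbed `summarize()` standing in for the real model calls (all names here are illustrative, not the actual AINews pipeline):

```python
from typing import Callable

def summarize(text: str, max_words: int = 10) -> str:
    # Stub: a real implementation would call an LLM API here
    # (GPT-4o for stage one; 4o/mini/sonnet/opus for stage two).
    return " ".join(text.split()[:max_words])

def rollup(channels: dict[str, str],
           stage_one: Callable[[str], str],
           stage_two: Callable[[str], str]) -> str:
    """Stage 1: summarize each channel. Stage 2: summarize the summaries."""
    per_channel = {name: stage_one(msgs) for name, msgs in channels.items()}
    combined = "\n".join(f"{name}: {s}" for name, s in per_channel.items())
    return stage_two(combined)

digest = rollup(
    {"#general": "Mistral NeMo 12B released with a 128k context window today",
     "#help": "questions about finetuning and saving models locally"},
    stage_one=summarize,
    stage_two=lambda text: summarize(text, max_words=20),
)
print(digest)
```

Swapping different models into `stage_two` while holding `stage_one` fixed is exactly what makes the {4o/mini/sonnet/opus} variants comparable.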

Claude 3 Sonnet

1. New AI Model Launches and Capabilities

2. Advancements in Large Language Model Techniques

3. Hardware Optimization and AI Performance

4. AI Coding Assistants and Integrations

Claude 3.5 Sonnet

1. AI Model Launches and Upgrades

2. Open-Source AI Advancements

3. AI Safety and Ethical Challenges

Claude 3 Opus

1. Mistral NeMo 12B Model Release

2. GPT-4o Mini Launch & Jailbreak

3. Advances in AI Training & Deployment

GPT4O (gpt-4o-2024-05-13)

1. Mistral NeMo Release

2. AI Hardware Optimization

3. Multimodal AI Advancements

4. Model Training Issues

GPT4OMini (gpt-4o-mini-2024-07-18)

1. Mistral NeMo Model Launch

2. GPT-4o Mini Release

3. Deep Learning Hardware Optimization

4. RAG Implementation Challenges

5. Multimodal AI Advancements


PART 1: High-level Discord summaries

Unsloth AI (Daniel Han) Discord


HuggingFace Discord


CUDA MODE Discord


Stability.ai (Stable Diffusion) Discord


Eleuther Discord


LM Studio Discord


Nous Research AI Discord


Latent Space Discord


OpenAI Discord


Interconnects (Nathan Lambert) Discord


OpenRouter (Alex Atallah) Discord


Modular (Mojo 🔥) Discord


Cohere Discord


Perplexity AI Discord


LangChain AI Discord


LlamaIndex Discord


OpenInterpreter Discord


OpenAccess AI Collective (axolotl) Discord


LLM Finetuning (Hamel + Dan) Discord


LAION Discord


Torchtune Discord


tinygrad (George Hotz) Discord


PART 2: Detailed by-Channel summaries and links

{% if medium == 'web' %}

Unsloth AI (Daniel Han) ▷ #general (245 messages🔥🔥):

  • RAG
  • Mistral NeMo release
  • Unsloth Studio
  • Mistral-Nemo integration
  • Flash Attention support

Links mentioned:


Unsloth AI (Daniel Han) ▷ #off-topic (11 messages🔥):

  • GPU recommendations
  • Runpod
  • Binary message
  • Shylily fans

Unsloth AI (Daniel Han) ▷ #help (84 messages🔥🔥):

  • disabling pad_token
  • finetuning and saving models
  • model sizes and memory consumption
  • fine-tuning locally
  • handling new errors with GPU and dtype

Links mentioned:


Unsloth AI (Daniel Han) ▷ #research (7 messages):

  • Synthesis of Topic Outlines through Retrieval and Multi-perspective Question Asking (STORM)
  • EfficientQAT for LLM Quantization
  • Memory3 Architecture for LLMs
  • Spectra LLM Suite and Quantization
  • Patch-Level Training for LLMs

Links mentioned:


HuggingFace ▷ #announcements (1 messages):

  • Watermark Remover
  • CandyLLM
  • AI Comic Factory Update
  • Fast Subtitle Maker
  • HF Text Embedding on Intel GPUs

Link mentioned: How to transition to Machine Learning from any field? | Artificial Intelligence ft. @vizuara: In this video, Dr. Raj Dandekar from Vizuara shares his experience of transitioning from mechanical engineering to Machine Learning (ML). He also explains be...


HuggingFace ▷ #general (222 messages🔥🔥):

  • Huggingchat Performance Issues
  • Model Training Queries
  • RVC and Voice Model Alternatives
  • Text2text Generation Issues
  • Admin Ping Etiquette

Links mentioned:


HuggingFace ▷ #today-im-learning (1 messages):

rp0101: https://youtu.be/N0eYoJC6USE?si=zms6lSsZkF6_vL0E


HuggingFace ▷ #cool-finds (7 messages):

  • Transformers.js tutorial
  • Computer Vision Course
  • AutoTrainer
  • Mistral NeMo
  • Discord moderation

Links mentioned:


HuggingFace ▷ #i-made-this (23 messages🔥):

  • AI Comic Factory update
  • Tool for transcribing and summarizing videos
  • Feedback request for AI assistant
  • Gradio Python library
  • Watermark remover using Florence 2 and Lama Cleaner

Links mentioned:


HuggingFace ▷ #reading-group (5 messages):

  • Delayed Project Presentation
  • Beginner Friendly Papers
  • Optimization of ML Model Layers

HuggingFace ▷ #computer-vision (1 messages):

dorbit_: Hey! Does anybody have experience with camera calibration with Transformers?


HuggingFace ▷ #NLP (5 messages):

  • Image to Video Diffusion Model
  • Prompt Engineering for SVD
  • Installing Transformers & Accelerate
  • Text Classification with Multiple Tags
  • YOLO Model Confusion

Link mentioned: stabilityai/stable-video-diffusion-img2vid-xt · Hugging Face: no description found


CUDA MODE ▷ #general (6 messages):

  • CUDA kernel splitting
  • Loss masking in LLM training
  • Sebastian Raschka's research insights
  • NVIDIA open-source kernel modules
  • CUDA graphs

Links mentioned:


CUDA MODE ▷ #triton (1 messages):

  • Missing tl.pow
  • triton.language.extra.libdevice.pow()

CUDA MODE ▷ #torch (37 messages🔥):

  • Profiling and Distributed Cases in CUDA
  • Dynamic Shared Memory in CUDA
  • Issues with torch.compile
  • Installing torch-tensorrt
  • Replacing aten::embedding_dense_backward with Triton Kernels

Links mentioned:


CUDA MODE ▷ #algorithms (1 messages):

  • Google Gemma 2 family of models
  • Together AI Flash Attention 3
  • QGaLoRE: Quantised low rank gradients for fine tuning
  • Mistral AI MathΣtral and CodeStral mamba

Link mentioned: AIUnplugged 15: Gemma 2, Flash Attention 3, QGaLoRE, MathΣtral and Codestral Mamba: Insights over Information


CUDA MODE ▷ #beginner (6 messages):

  • CUTLASS repo tutorials
  • Nsight CLI resources

Link mentioned: cutlass/examples/cute/tutorial/sgemm_1.cu at main · NVIDIA/cutlass: CUDA Templates for Linear Algebra Subroutines. Contribute to NVIDIA/cutlass development by creating an account on GitHub.


CUDA MODE ▷ #torchao (2 messages):

  • HF related discussions
  • FSDP2 replacing FSDP

CUDA MODE ▷ #triton-puzzles (2 messages):

  • Triton compiler details
  • Triton puzzles solutions

Links mentioned:


CUDA MODE ▷ #llmdotc (159 messages🔥🔥):

  • FP8 training settings
  • Layernorm optimizations
  • GPT-3 models
  • Memory management refactoring
  • FP8 inference

Links mentioned:


CUDA MODE ▷ #lecture-qa (9 messages🔥):

  • Deep Copy in GPU Operations
  • Kernel Parameter Limits
  • Pointer Handling in CUDA
  • Quantization and Group Size

Stability.ai (Stable Diffusion) ▷ #general-chat (213 messages🔥🔥):

  • Hermes 2
  • Mistral struggles
  • Model Merging
  • Open Empathic

Links mentioned:


Eleuther ▷ #announcements (1 messages):

  • GoldFinch hybrid model
  • Linear Attention vs Transformers
  • GoldFinch performance benchmarks
  • Finch-C2 and GPTAlpha releases

Links mentioned:


Eleuther ▷ #general (72 messages🔥🔥):

  • Drama over AI scraping
  • Whisper MIT License Misunderstanding
  • Google scraping and content use
  • Random pages mentioning Pile
  • Community project involvement

Eleuther ▷ #research (108 messages🔥🔥):

  • ICML 2024
  • Attention Mechanisms
  • Protein Language Models
  • Patch-Level Training
  • Language Model Scaling

Links mentioned:


Eleuther ▷ #interpretability-general (1 messages):

  • tokenization-free models
  • interpretability in AI

Eleuther ▷ #lm-thunderdome (14 messages🔥):

  • lm-eval-harness --predict_only flag
  • TRL finetuning with lora
  • Embedding matrices issue in PeftModelForCausalLM
  • Gigachat model PR review
  • simple_evaluate responses storage

Link mentioned: Add Gigachat model by seldereyy · Pull Request #1996 · EleutherAI/lm-evaluation-harness: Add a new model to the library using the API with chat templates. For authorization set environmental variables "GIGACHAT_CREDENTIALS" and "GIGACHAT_SCOPE" for your API auth_data a...


LM Studio ▷ #💬-general (59 messages🔥🔥):

  • Codestral Mamba on LM Studio
  • Context length issues in LM Studio
  • Model suggestions for NSFW/roleplay
  • Gemma IT GPU issues
  • Mistral-Nemo 12B collaboration with NVIDIA

Links mentioned:


LM Studio ▷ #🤖-models-discussion-chat (23 messages🔥):

  • DeepSeek-V2 integration
  • Mistral NeMo model release
  • China's use of open-source LLMs
  • Logical reasoning in LLMs
  • Verbose responses in LLMs

Links mentioned:


LM Studio ▷ #🧠-feedback (1 messages):

xoxo3331: There is no argument or flag to load a model with a preset through cli


LM Studio ▷ #📝-prompts-discussion-chat (1 messages):

  • Meta Llama 3
  • Prompt strategies
  • Stock trading strategies
  • Fund allocation
  • Risk management

LM Studio ▷ #⚙-configs-discussion (1 messages):

  • New Model Discussion
  • LMStudio Preset for Autogen
  • Llama-3-Groq-8B-Tool-Use-GGUF

LM Studio ▷ #🎛-hardware-discussion (23 messages🔥):

  • Custom Hardware Specs
  • Resizable BAR Impact on LLM
  • NVIDIA GTX 1050 Issues
  • ROCM Version Update
  • DIY Safety Concerns

LM Studio ▷ #🧪-beta-releases-chat (3 messages):

  • 0.3.0 Beta Enrollment
  • Beta Download
  • Beta Announcements

LM Studio ▷ #amd-rocm-tech-preview (4 messages):

  • CUDA on AMD
  • zluda
  • scale
  • portable install option

Link mentioned: Reddit - Dive into anything: no description found


LM Studio ▷ #model-announcements (1 messages):

  • Groq's tool use models
  • Berkeley Function Calling Leaderboard
  • Llama-3 Groq-8B
  • Llama-3 Groq-70B
  • tool use and function calling

LM Studio ▷ #🛠-dev-chat (14 messages🔥):

  • Hosting AI models online
  • Ngrok vs Nginx for hosting
  • Custom web UI and SSR technique
  • Tailscale for secure tunneling
  • Building user accounts and separate chats

Nous Research AI ▷ #research-papers (2 messages):

  • TextGrad
  • ProTeGi
  • STORM Writing System

Links mentioned:


Nous Research AI ▷ #datasets (1 messages):

  • Synthetic dataset
  • General knowledge base

Link mentioned: GitHub - Mill-Pond-Research/AI-Knowledge-Base: Comprehensive Generalized Knowledge Base for AI Systems (RAG): Comprehensive Generalized Knowledge Base for AI Systems (RAG) - Mill-Pond-Research/AI-Knowledge-Base


Nous Research AI ▷ #interesting-links (3 messages):

  • Intelligent Digital Agents
  • Mistral-NeMo-12B-Instruct
  • AgentInstruct for Synthetic Data

Links mentioned:


Nous Research AI ▷ #general (115 messages🔥🔥):

  • Twitter/X Model Livestream
  • New Paper on LLM Jailbreaking
  • Mistral NeMo Model Release
  • AutoFP8 and FP8 Quantization
  • GPT-4o Mini Benchmark Performance

Links mentioned:


Nous Research AI ▷ #world-sim (6 messages):

  • WorldSim downtime
  • WorldSim issue

Latent Space ▷ #ai-general-chat (121 messages🔥🔥):

  • DeepSeek V2
  • GPT-5 Speculation
  • GPT-4o Mini Release
  • DeepSeek V2 Discussion
  • New LLaMA 3

Links mentioned:


Latent Space ▷ #ai-announcements (1 messages):

  • Model drop day
  • Updated thread discussions

OpenAI ▷ #annnouncements (1 messages):

  • GPT-4o mini launch

OpenAI ▷ #ai-discussions (66 messages🔥🔥):

  • Voice extraction model from Eleven Labs
  • Switching from GPT to Claude
  • Nvidia installer bundling with Facebook, Instagram, and Meta's Twitter
  • GPT-4o mini rollout and differences from GPT-4o
  • Issues with ChatGPT loading and troubleshooting steps

Link mentioned: Gollum Lord GIF - Gollum Lord Of - Discover & Share GIFs: Click to view the GIF


OpenAI ▷ #gpt-4-discussions (15 messages🔥):

  • GPTs Agents
  • OpenAI API errors
  • 4o mini token limits
  • OpenAI image token count
  • 4o mini vs 4o capabilities

OpenAI ▷ #prompt-engineering (20 messages🔥):

  • IF...THEN... logic in prompts
  • GPT-4 hallucinations
  • EWAC command framework
  • Voice agent with controlled pauses
  • Prompt engineering tips

OpenAI ▷ #api-discussions (20 messages🔥):

  • ChatGPT hallucination management
  • EWAC discussion framework
  • Voice agent pause control
  • Innovative prompting techniques
  • Thought evoking in AI responses

Interconnects (Nathan Lambert) ▷ #events (1 messages):

natolambert: Anyone at ICML? A vc friend of mine wants to meet my friends at a fancy dinner


Interconnects (Nathan Lambert) ▷ #news (74 messages🔥🔥):

  • Meta's multimodal Llama model
  • Mistral NeMo release
  • GPT-4o mini release
  • Tekken tokenizer
  • OpenAI safety mechanism jailbreak

Links mentioned:


Interconnects (Nathan Lambert) ▷ #ml-questions (5 messages):

  • Code-related PRM datasets
  • AST mutation method
  • Positive, Negative, Neutral labels vs Scalar values
  • PRM-800K
  • Research & MS program

Interconnects (Nathan Lambert) ▷ #ml-drama (21 messages🔥):

  • Public Perception of AI
  • OpenAI's Business Challenges
  • Google vs OpenAI Competition
  • AI Scaling Issues

Link mentioned: Tweet from TDM (e/λ) (@cto_junior): Every cool thing is later pretty sure we'll get Gemini-2.0 before all of this which anyways supports all modalities


Interconnects (Nathan Lambert) ▷ #random (9 messages🔥):

  • Codestral Mamba model
  • DeepSeek-V2-0628 release
  • Mamba infinite context
  • Open-sourced models
  • LMSYS Chatbot Arena

Links mentioned:


OpenRouter (Alex Atallah) ▷ #announcements (1 messages):

  • GPT-4o Mini
  • Cost-efficiency of GPT-4o Mini

Links mentioned:


OpenRouter (Alex Atallah) ▷ #general (97 messages🔥🔥):

  • Codestral 22B
  • OpenRouter outages
  • Mistral NeMo release
  • GPT-4o mini release
  • Image token pricing issues

Links mentioned:


Modular (Mojo 🔥) ▷ #general (7 messages):

  • Linking to C libraries request
  • Mojo GPU support
  • Max platform NVIDIA GPU announcement
  • MLIR dialects and CUDA/NVIDIA

Link mentioned: Issues · modularml/mojo: The Mojo Programming Language. Contribute to modularml/mojo development by creating an account on GitHub.


Modular (Mojo 🔥) ▷ #💬︱twitter (1 messages):

ModularBot: From Modular: https://twitter.com/Modular/status/1813988940405493914


Modular (Mojo 🔥) ▷ #ai (7 messages):

  • Image object detection in video
  • Frame rate adjustment
  • Handling bounding box issues
  • Processing MP4 videos
  • Managing large video frames

Modular (Mojo 🔥) ▷ #mojo (35 messages🔥):

  • Loop through Tuple in Mojo
  • Naming conventions in Mojo
  • Keras 3.0 compatibility and advancement
  • MAX and HPC capabilities
  • Alias Tuples of FloatLiterals in Mojo

Links mentioned:


Modular (Mojo 🔥) ▷ #max (5 messages):

  • Command-Line Prompt Usage
  • Model Weight URIs
  • Llama 3 Pipeline
  • Interactive Chatbot example

Links mentioned:


Modular (Mojo 🔥) ▷ #nightly (13 messages🔥):

  • Mojo Compiler Update
  • Standard Library Extensions Proposal
  • Discussion on Allocator Awareness
  • Async IO API and Performance
  • Opt-out of stdlib

Links mentioned:


Modular (Mojo 🔥) ▷ #mojo-marathons (16 messages🔥):

  • Lubeck
  • MKL
  • LLVM
  • BLAS Linking
  • SPIRAL

Links mentioned:


Cohere ▷ #general (52 messages🔥):

  • Creating new tools for API
  • Tools vs Connectors
  • Permissions for sending images and GIFs
  • DuckDuckGo search in projects

Links mentioned:


Cohere ▷ #project-sharing (31 messages🔥):

  • Python development for scraping
  • Library for collecting URLs
  • Firecrawl self-hosting
  • Cost concerns of Firecrawl
  • API integration with GPT-4o

Links mentioned:


Perplexity AI ▷ #general (63 messages🔥🔥):

  • Google Sheets login issue
  • Perplexity analyzing multiple PDFs
  • GPT-4 vs. GPT-4 Omni answers
  • Perplexity Pro email from Logitech
  • DALL-E update speculation

Links mentioned:


Perplexity AI ▷ #sharing (5 messages):

  • Record-Breaking Stegosaurus Sale
  • Lab-Grown Pet Food Approved
  • Anthropic's $100M AI Fund
  • H2O-3 Code Execution Vulnerability

Link mentioned: YouTube: no description found


Perplexity AI ▷ #pplx-api (5 messages):

  • NextCloud Perplexity API setup
  • Model selection issues
  • API call suggestions
  • Formatting responses in API queries

Link mentioned: Supported Models: no description found


LangChain AI ▷ #general (39 messages🔥):

  • Openrouter integration with LangChain
  • Code-based RAG examples for Q&A chatbot
  • Using trimMessages with Llama2 model
  • Setting beta header for Claude in LangChain
  • MongoDB hybrid search with LangChain

Links mentioned:


LangChain AI ▷ #langserve (2 messages):

  • Langserve Debugger Container
  • Langserve Container

Links mentioned:


LangChain AI ▷ #langchain-templates (1 messages):

  • ChatPromptTemplate JSON Issue
  • GitHub support for LangChain

Link mentioned: Issues · langchain-ai/langchain: 🦜🔗 Build context-aware reasoning applications. Contribute to langchain-ai/langchain development by creating an account on GitHub.


LangChain AI ▷ #share-your-work (1 messages):

  • Easy Folders launch
  • Product Hunt
  • Superuser membership
  • Productivity tools
  • Browser extensions

Link mentioned: Easy Folders for ChatGPT & Claude - Declutter and organize your chat history | Product Hunt: Create Folders, Search Chat History, Bookmark Chats, Prompts Manager, Prompts Library, Custom Instruction Profiles, and more.


LangChain AI ▷ #tutorials (1 messages):

  • LangGraph
  • Corrective RAG
  • RAG Fusion Python Project
  • Chatbot hallucinations

Link mentioned: LangGraph + Corrective RAG + RAG Fusion Python Project: Easy AI/Chat for your Docs: #chatbot #coding #ai #llm #chatgpt #python #In this video, I have a super quick tutorial for you showing how to create a fully local chatbot with LangGraph, ...


LlamaIndex ▷ #blog (4 messages):

  • Jerry Liu's Keynote at AI World's Fair
  • RAGapp New Features
  • StackPodcast Interview with Jerry Liu
  • New Model Releases from MistralAI and OpenAI

Links mentioned:


LlamaIndex ▷ #general (21 messages🔥):

  • Neo4jPropertyGraphStore indexing
  • Starting with Llama Index
  • Setting min outputs in LLMMultiSelector
  • RAG evaluation frameworks
  • OpenAI data masking

Links mentioned:


LlamaIndex ▷ #ai-discussion (2 messages):

  • Query rewriting
  • Multimodal RAG using GPT4o and Sonnet3.5
  • LlamaIndex performance
  • Langchain and RAG app development
  • Document splitting in LlamaIndex

Link mentioned: llama_parse/examples/multimodal/claude_parse.ipynb at main · run-llama/llama_parse: Parse files for optimal RAG. Contribute to run-llama/llama_parse development by creating an account on GitHub.


OpenInterpreter ▷ #general (19 messages🔥):

  • OpenInterpreter Hits 10,000 Members
  • Affordable AI outperforming GPT-4
  • Fast Multimodal AI Agents

OpenAccess AI Collective (axolotl) ▷ #general (9 messages🔥):

  • High context length challenges
  • Mistral NeMo release
  • Mistral NeMo performance comparison
  • Training inference capabilities in transformers

Links mentioned:


OpenAccess AI Collective (axolotl) ▷ #general-help (7 messages):

  • Overfitting in GEM-A
  • LLama3 Model
  • Impact of Lowering Rank on Eval Loss
  • Training Loss Observations

LLM Finetuning (Hamel + Dan) ▷ #general (7 messages):

  • Comparative Performance of LLMs
  • Hugging Face Model Latency on Mac M1
  • Data Sensitivity with GPT Models

LLM Finetuning (Hamel + Dan) ▷ #jarvis-labs (1 messages):

ashpun: i dont think there is an expiration date. do we have <@657253582088699918> ?


LAION ▷ #general (2 messages):

  • Meta's multimodal AI models
  • Llama models not available for EU users

LAION ▷ #research (6 messages):

  • Codestral Mamba
  • Prover-Verifier Games
  • NuminaMath-7B
  • Mistral NeMo

Links mentioned:


Torchtune ▷ #dev (6 messages):

  • Custom template formatting
  • CI behavior in PRs
  • Instruction dataset issues

tinygrad (George Hotz) ▷ #learn-tinygrad (3 messages):

  • GTX1080 compatibility with tinygrad
  • CUDA support for older NVIDIA cards







{% else %}

The full channel-by-channel breakdowns have been truncated for email.

If you want the full breakdown, please visit the web version of this email: [{{ email.subject }}]({{ email_url }})!

If you enjoyed AInews, please share with a friend! Thanks in advance!

{% endif %}