Frozen AI News archive

Llama 3.1: The Synthetic Data Model

**Meta AI** has released **Llama 3.1**, including a **405B parameter model** that triggers regulatory considerations like the **EU AI Act** and **SB 1047**. The model incorporates extensive **synthetic data** techniques for **code**, **math**, **multilinguality**, **long context**, and **tool use** fine-tuning, with **RLHF** using synthetic preference data since **Llama 2**. The launch was coordinated across major inference providers, with **Groq** demonstrating **750 tokens per second** inference speed and **Fireworks** leading on pricing. The updated license explicitly allows synthetic data generation, marking a significant step for open frontier-class LLMs, alongside the industry's cost-efficiency gains since March.


AI News for 7/22/2024-7/23/2024. We checked 7 subreddits, 384 Twitters and 30 Discords (474 channels, and 5128 messages) for you. Estimated reading time saved (at 200wpm): 473 minutes. You can now tag @smol_ai for AINews discussions!

Llama 3.1 is here! (Site, Video, Paper, Code, model, Zuck, Latent Space pod). The release includes the 405B model, which triggers both the EU AI Act and SB 1047. The full paper has all the frontier model comparisons you want:


We'll assume you read the headlines from yesterday. It's not up on LMSYS yet, but independent evals on SEAL and Allen AI's ZeroEval are promising (with some disagreement). It was a well-coordinated launch across ~every inference provider in the industry, including (of course) Groq showing a flashy demo inferencing at 750 tok/s. Inference pricing is also out, with Fireworks leading the pack.
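For a sense of scale, throughput and per-token pricing numbers like these translate directly into latency and cost. A minimal sketch (the dollar rates below are placeholders for illustration, not Fireworks' actual quotes):

```python
def generation_time_s(output_tokens: int, tokens_per_s: float) -> float:
    # Wall-clock time to stream a completion at a given decode speed.
    return output_tokens / tokens_per_s

def request_cost_usd(prompt_tokens: int, output_tokens: int,
                     usd_per_m_in: float, usd_per_m_out: float) -> float:
    # Providers typically bill input and output tokens separately,
    # quoted in dollars per million tokens.
    return (prompt_tokens / 1e6) * usd_per_m_in \
         + (output_tokens / 1e6) * usd_per_m_out

# At Groq's demoed 750 tok/s, a 1,500-token answer streams in ~2 seconds.
latency = generation_time_s(1500, 750.0)  # -> 2.0

# With hypothetical $3/M rates for both input and output (placeholder):
cost = request_cost_usd(2000, 1500, 3.0, 3.0)  # -> $0.0105 per request
```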

While it is widely speculated that the 8B and 70B were "offline distillations" of the 405B, there are a good deal more synthetic data elements to Llama 3.1 than expected. The paper explicitly calls out synthetic data for code, math, multilinguality, long context, and tool use fine-tuning, as well as synthetic preference data in RLHF.
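The mechanics behind one of these techniques — filtering teacher-generated code with execution feedback (rejection sampling) — can be sketched in a few lines. This is a toy illustration, not Meta's pipeline: `teacher_generate` is a stub standing in for sampling from a large teacher model, and the unit tests play the role of the execution-feedback filter.

```python
import random

def teacher_generate(prompt: str, n: int, seed: int = 0) -> list[str]:
    # Stand-in for sampling n candidate completions from a large teacher
    # model; this stub returns a mix of correct and buggy add() bodies.
    rng = random.Random(seed)
    good = "def add(a, b):\n    return a + b"
    bad = "def add(a, b):\n    return a - b"
    return [good if rng.random() < 0.5 else bad for _ in range(n)]

def passes_tests(code: str) -> bool:
    # Execution feedback: run the candidate against known unit tests
    # and keep it only if every assertion holds.
    scope = {}
    try:
        exec(code, scope)
        assert scope["add"](2, 3) == 5
        assert scope["add"](-1, 1) == 0
        return True
    except Exception:
        return False

def build_synthetic_dataset(prompt: str, n: int = 16):
    # Rejection sampling: over-generate, filter by execution,
    # keep the surviving (prompt, completion) pairs for fine-tuning.
    candidates = teacher_generate(prompt, n)
    return [(prompt, c) for c in candidates if passes_tests(c)]

pairs = build_synthetic_dataset("Write add(a, b) returning the sum.")
```

The same over-generate-then-filter shape applies to math (check the final answer) and tool use (check the call succeeded); only the filter changes.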

Last but not least, Llama 3.1 received a license update explicitly allowing its use for synthetic data generation.

We finally have a frontier-class open LLM. It is also worth noting how far the industry has moved in cost per intelligence since March, and it will only get better from here.



{% if medium == 'web' %}

Table of Contents

[TOC]

{% else %}

The Table of Contents and Channel Summaries have been moved to the web version of this email: [{{ email.subject }}]({{ email_url }})!

{% endif %}


AI Twitter Recap

all recaps done by Claude 3.5 Sonnet, best of 4 runs.

Meta AI

AI Assistants and Agents

Benchmarks and Evaluations

Frameworks and Tools


AI Reddit Recap

/r/LocalLlama Recap

Theme 1. Running Large Language Models Locally

Theme 2. LLaMA 3.1 405B Model Release and Benchmarks

Theme 3. Distributed and Federated AI Inference

Theme 4. New AI Model Releases and Leaks

All AI Reddit Recap

/r/machinelearning, /r/openai, /r/stablediffusion, /r/ArtificialInteligence, /r/LLMDevs, /r/Singularity

Theme 1. OpenAI's Universal Basic Income Experiment Results

Theme 4. AI Researcher Predictions on AGI Timeline

Theme 5. New AI Training Infrastructure Developments


AI Discord Recap

A summary of Summaries of Summaries

1. LLM Advancements and Benchmarking

2. Optimizing LLM Inference and Training

3. Open-Source AI Frameworks and Community Efforts

4. Multimodal AI and Generative Modeling Innovations


PART 1: High level Discord summaries

HuggingFace Discord


Nous Research AI Discord


LM Studio Discord


Perplexity AI Discord


Stability.ai (Stable Diffusion) Discord


OpenRouter (Alex Atallah) Discord


CUDA MODE Discord


OpenAI Discord


Modular (Mojo 🔥) Discord


Eleuther Discord


Interconnects (Nathan Lambert) Discord


OpenAccess AI Collective (axolotl) Discord


DSPy Discord


LlamaIndex Discord


Latent Space Discord


LangChain AI Discord


Cohere Discord


Torchtune Discord


tinygrad (George Hotz) Discord


LAION Discord


OpenInterpreter Discord


Alignment Lab AI Discord


LLM Finetuning (Hamel + Dan) Discord


AI Stack Devs (Yoko Li) Discord


Mozilla AI Discord


The LLM Perf Enthusiasts AI Discord has no new messages. If this guild has been quiet for too long, let us know and we will remove it.


The MLOps @Chipro Discord has no new messages. If this guild has been quiet for too long, let us know and we will remove it.


The DiscoResearch Discord has no new messages. If this guild has been quiet for too long, let us know and we will remove it.


The AI21 Labs (Jamba) Discord has no new messages. If this guild has been quiet for too long, let us know and we will remove it.


PART 2: Detailed by-Channel summaries and links

{% if medium == 'web' %}

HuggingFace ▷ #announcements (1 messages):

  • NuminaMath datasets
  • Docmatix dataset
  • SmolLM models
  • Chameleon model
  • Followgraph tool

Links mentioned:


HuggingFace ▷ #general (1104 messages🔥🔥🔥):

  • Llama 3.1 release
  • Kanye West controversy
  • Building PC setups
  • Model fine-tuning practices
  • Textbook recommendations for LLMs

Links mentioned:


HuggingFace ▷ #today-im-learning (4 messages):

  • Speaker Diarization & Transcription
  • Sankey Plots Visualization
  • Dynamic Graph Node Management
  • PEFT Model Loading Methods
  • Adapter Configuration in Models

HuggingFace ▷ #cool-finds (5 messages):

  • Willing Suspension of Disbelief
  • nanoLLaVA model
  • Meta's Llama 3.1 release
  • Mark Zuckerberg's vision for open-source AI

Links mentioned:


HuggingFace ▷ #i-made-this (10 messages🔥):

  • UltraPixel high resolution images
  • Rust client library for Gradio
  • SmolLM Arena updates
  • YouTube Notes Generator
  • Mistral-NeMo 12B Instruct

Links mentioned:


HuggingFace ▷ #computer-vision (2 messages):

  • Anime-style dataset for Anything V5
  • Fine-tuning SD models

Link mentioned: stablediffusionapi/anything-v5 · Hugging Face: no description found


HuggingFace ▷ #NLP (17 messages🔥):

  • Non packed datasets with SFTTrainer
  • Error handling with tensor creation
  • Embedding model for numerical data
  • Using Donut for generation
  • Modifications in Transformers library

Links mentioned:


HuggingFace ▷ #diffusion-discussions (1 messages):

  • Background removal
  • Segmentation
  • Diffusion models

Nous Research AI ▷ #research-papers (2 messages):

  • Magpie Paper
  • Nous Research AI
  • Instruction Generation Techniques

Link mentioned: NNsight and NDIF: Democratizing Access to Foundation Model Internals: The enormous scale of state-of-the-art foundation models has limited their accessibility to scientists, because customized experiments at large model sizes require costly hardware and complex engineer...


Nous Research AI ▷ #off-topic (9 messages🔥):

  • ReFT paper discussion
  • YouTube video on ReFT
  • Oxen AI community activity
  • PC Agent Demo
  • Emoji duplication in server

Links mentioned:


Nous Research AI ▷ #interesting-links (62 messages🔥🔥):

  • Bud-E Voice Assistant
  • Llama 3.1 Models
  • Synthetic Dataset Creation
  • Graph RAG by Microsoft
  • DSPy Python Library

Links mentioned:


Nous Research AI ▷ #general (489 messages🔥🔥🔥):

  • Llama 3.1 Performance
  • Quantization and Fine-tuning
  • Tool Calling Methods
  • Model Inference and Evaluation
  • Open Source Licensing

Links mentioned:


Nous Research AI ▷ #ask-about-llms (18 messages🔥):

  • Training larger bitnet models
  • Differences in model fine-tuning
  • Fine-tuning Llama 3.0
  • Multilingual fine-tuning resources

Nous Research AI ▷ #rag-dataset (4 messages):

  • Kuzu Graph Database
  • GraphRAG and Outlines
  • Entity Deduplication Techniques
  • Property Graph Index
  • Duplicate Detection in Graph Databases

Link mentioned: Customizing Property Graph Index in LlamaIndex: Learn how to perform entity deduplication and custom retrieval methods using LlamaIndex to increase GraphRAG accuracy.


Nous Research AI ▷ #world-sim (1 messages):

jmiles38: <@414158939555364865> are you a contributor to worldsim/world client?


Nous Research AI ▷ #reasoning-tasks-master-list (74 messages🔥🔥):

  • Open Reasoning Tasks
  • Schema and Formatting Improvements
  • Reasoning Techniques and Tools
  • Master List for Reasoning Papers
  • SMT Solvers for Reasoning

Links mentioned:


LM Studio ▷ #💬-general (197 messages🔥🔥):

  • LM Studio performance
  • Model downloads issues
  • Linux compatibility with GPU
  • Llama 3.1 capabilities
  • ROCm installation

Links mentioned:


LM Studio ▷ #🤖-models-discussion-chat (92 messages🔥🔥):

  • High Memory Usage of Qwen 2
  • LM Studio Model Compatibility
  • Meta-Llama Model Recommendations
  • Advancements in Gemini and Deepseek
  • LLM Compiler for Advanced Coding

Links mentioned:


LM Studio ▷ #announcements (1 messages):

  • Search Functionality

LM Studio ▷ #🧠-feedback (4 messages):

  • hf-mirror.com
  • Latex support for Llama 3.1 models

LM Studio ▷ #⚙-configs-discussion (12 messages🔥):

  • Llama 3 Configuration
  • GPU Settings
  • Roleplay Scenarios
  • Context Length Settings

LM Studio ▷ #🎛-hardware-discussion (23 messages🔥):

  • Fine-tuning with 3090s
  • GGUF Fine-tuning Limitations
  • GPU Acceleration on RX 6700 XT
  • Quantized Model Fine-tuning
  • GPU Requirements for LLMs

Links mentioned:


LM Studio ▷ #🧪-beta-releases-chat (112 messages🔥🔥):

  • Beta UI Improvements
  • Feedback on Model Loading
  • User Experience Concerns
  • Issues with GPU Usage
  • Beta Testing Process

Links mentioned:


LM Studio ▷ #amd-rocm-tech-preview (5 messages):

  • ROCm 0.2.28 performance issues
  • Llama 3.1 compatibility with AMD cards

LM Studio ▷ #model-announcements (1 messages):

  • Llama 3.1
  • longer context improvements

LM Studio ▷ #🛠-dev-chat (4 messages):

  • Mistral download issues
  • VPN connectivity problems
  • LLM model for grading
  • CHROMA data usage

Perplexity AI ▷ #announcements (1 messages):

  • Llama 3.1 405B
  • Perplexity mobile apps

Perplexity AI ▷ #general (273 messages🔥🔥):

  • Performance of Llama 3.1 405B
  • Comparison of Llama 3.1 405B and Claude 3.5 Sonnet
  • Perplexity AI features and issues
  • Feedback on AI responses
  • API and usage experiences

Links mentioned:


Perplexity AI ▷ #sharing (12 messages🔥):

  • Dark Oxygen
  • Mercury's Diamonds
  • Beach-Cleaning Robots
  • Munger's Inversion Technique
  • Llama 3 Release

Link mentioned: YouTube: no description found


Perplexity AI ▷ #pplx-api (13 messages🔥):

  • Llama model updates
  • Perplexity API and DSGVO
  • Search site limitations

Stability.ai (Stable Diffusion) ▷ #general-chat (282 messages🔥🔥):

  • Stable Diffusion models comparison
  • Training Lycoris and Loras
  • Community perceptions of Stable Diffusion
  • New developments in AI models
  • General discussions and inquiries

Links mentioned:


OpenRouter (Alex Atallah) ▷ #announcements (41 messages🔥):

  • Llama 3 405B Launch
  • Model Performance Comparisons
  • OpenRouter Features Updates
  • Prompt Competition Announcement
  • DeepSeek Coder V2 Inference Provider

Links mentioned:


OpenRouter (Alex Atallah) ▷ #general (190 messages🔥🔥):

  • Llama 405B Model Performance
  • Custom API Keys Integration
  • Comparison of Llama Models
  • Prompting Competition for Llama 405B
  • Fine-Tuning and Instruction Challenges

Links mentioned:


CUDA MODE ▷ #general (7 messages):

  • Register Allocation in Flash Attention
  • Kernel Fusion of Q, K, V Projections
  • Challenges with SVD Parallelization
  • Open Source GPU Kernel Modules

Link mentioned: NVIDIA Transitions Fully Towards Open-Source GPU Kernel Modules | NVIDIA Technical Blog: With the R515 driver, NVIDIA released a set of Linux GPU kernel modules in May 2022 as open source with dual GPL and MIT licensing. The initial release targeted datacenter compute GPUs…


CUDA MODE ▷ #torch (9 messages🔥):

  • torch.compile performance
  • Bert model inference issues
  • CUDA graphs usage
  • PyTorch profiler tools
  • Inductor configuration changes

CUDA MODE ▷ #cool-links (17 messages🔥):

  • Meta Llama 3.1 Release
  • GPU Allocations
  • Multi-modal Features
  • VLM Capabilities
  • CUDA Performance

Links mentioned:


CUDA MODE ▷ #beginner (4 messages):

  • Performance of CUDA Kernels
  • Tiled Matrix Multiplication
  • Compute Intensity

CUDA MODE ▷ #hqq (1 messages):

iron_bound: neat https://github.com/AnswerDotAI/fsdp_qlora/tree/llama400b


CUDA MODE ▷ #llmdotc (182 messages🔥🔥):

  • Performance of LLMs
  • KV Caching Implementation
  • MuP vs Other Optimizations
  • Floating Point Precision Techniques
  • Training Stability Methods

Links mentioned:


CUDA MODE ▷ #rocm (6 messages):

  • Stable Diffusion on RX7900XTX
  • Flash Attention support for AMD ROCm

Links mentioned:


OpenAI ▷ #ai-discussions (196 messages🔥🔥):

  • GEMINI Competition
  • Meta AI
  • Llama 3.1 Model
  • Voice Channel AI Bots
  • Fine-Tuning Llama Models

OpenAI ▷ #gpt-4-discussions (7 messages):

  • Alpha Release Timing
  • User Communication Concerns
  • App Testing

OpenAI ▷ #prompt-engineering (7 messages):

  • Meta-Prompting
  • Plagiarism in AI Output
  • Prompting Techniques

OpenAI ▷ #api-discussions (7 messages):

  • Meta-Prompting
  • Plagiarism in Generated Content
  • Prompt Improvement Suggestions

Modular (Mojo 🔥) ▷ #general (39 messages🔥):

  • Mojo Community Meeting Presentations
  • String Optimization in Standard Library
  • Installing Mojo on VM
  • Game Engine Development in Mojo
  • Linking with C Libraries

Links mentioned:


Modular (Mojo 🔥) ▷ #💬︱twitter (1 messages):

ModularBot: From Modular: https://twitter.com/Modular/status/1815463417391837596


Modular (Mojo 🔥) ▷ #mojo (50 messages🔥):

  • SDL Bindings
  • Mojo Game Frameworking
  • Physics Engine Development
  • Contributing to Mojo
  • Pygame with Mojo

Links mentioned:


Modular (Mojo 🔥) ▷ #max (9 messages🔥):

  • Modular's Industry Relationships
  • NVIDIA Support
  • OpenCL and SYCL Usage

Modular (Mojo 🔥) ▷ #max-gpu (2 messages):

  • XLA
  • MAX engine
  • GPU performance

Modular (Mojo 🔥) ▷ #nightly (86 messages🔥🔥):

  • Changes to memcpy
  • Documentation for Mojo
  • Use of Reference in Mojo
  • Updates on Mojo Nightly
  • Relationship of MAX and Mojo

Links mentioned:


Modular (Mojo 🔥) ▷ #mojo-marathons (1 messages):

  • Intel CPUID Library
  • AMD CPUID Mappings

Eleuther ▷ #general (84 messages🔥🔥):

  • FSDP performance issues
  • Llama 3.1 hosting
  • Generative ML contributions

Links mentioned:


Eleuther ▷ #research (43 messages🔥):

  • New SAE architecture
  • Monte Carlo Dropout comparison
  • Hierarchical 3D Gaussians
  • Llama 3 model details
  • Transformer performance and sparsity

Links mentioned:


Eleuther ▷ #interpretability-general (1 messages):

alofty: https://arxiv.org/abs/2407.14561


Eleuther ▷ #lm-thunderdome (23 messages🔥):

  • Task Grouping Recommendations
  • lm-eval Harness Updates
  • vLLM and Logits Issues
  • Automated Unit Testing Discussions
  • Transformers Version Problems

Links mentioned:


Eleuther ▷ #gpt-neox-dev (5 messages):

  • Nerdsniping Evaluation
  • Uncheatable Evaluation Harness

Interconnects (Nathan Lambert) ▷ #news (69 messages🔥🔥):

  • Meta's AI Strategy
  • NVIDIA's Market Position
  • OpenAI Pricing Wars
  • Llama 3.1 Release

Links mentioned:


Interconnects (Nathan Lambert) ▷ #ml-questions (16 messages🔥):

  • Magpie paper on synthetic data generation
  • LLaMA 3 Instruct performance
  • Instruction finetuning techniques
  • Vocabulary size and inference speed

Links mentioned:


Interconnects (Nathan Lambert) ▷ #ml-drama (43 messages🔥):

  • Llama 3 Release
  • Mark Zuckerberg's AI Era
  • Model Watermarking Concerns
  • Public Perception of Zuckerberg

Links mentioned:


Interconnects (Nathan Lambert) ▷ #random (3 messages):

  • Claude AI boundaries
  • Sacred Texts in AI
  • Release of GPT-3.5 Opus

Interconnects (Nathan Lambert) ▷ #memes (7 messages):

  • OpenAI vs Llama 3.1
  • ChatGPT Memory Management
  • Mark Zuckerberg's AI Era
  • Snail Appreciation

Links mentioned:


Interconnects (Nathan Lambert) ▷ #nlp (3 messages):

  • Distillation
  • Llama 3.1

Interconnects (Nathan Lambert) ▷ #posts (4 messages):

  • SnailBot updates
  • User engagement timings

OpenAccess AI Collective (axolotl) ▷ #general (73 messages🔥🔥):

  • Llama 3.1 Release
  • Mistral and Nemo Concerns
  • Training Issues
  • Language Inclusion in Models
  • Evaluation Scores Comparison

Links mentioned:


OpenAccess AI Collective (axolotl) ▷ #axolotl-dev (33 messages🔥):

  • LLM Distillation
  • DPO Training Issues
  • Adapter Fine Tuning
  • Reward Modeling
  • ChiPO Algorithm

Links mentioned:


OpenAccess AI Collective (axolotl) ▷ #datasets (3 messages):

  • LLM for verb tense conversion
  • Spacy script for perspective change
  • Third-person to first-person conversion
  • Dataset for tense conversion examples

DSPy ▷ #show-and-tell (8 messages🔥):

  • Code Confluence Tool
  • DSPY Integration
  • Zenbase/Core Library Launch

DSPy ▷ #papers (2 messages):

  • AI Research Paper
  • Implementation Requests

DSPy ▷ #general (83 messages🔥🔥):

  • DSPy and Outlines comparison
  • Entity extraction with DSPy
  • Structured output issues with Llama3
  • Optimizer updates in DSPy
  • LOTUS integration with Phoenix

Links mentioned:


DSPy ▷ #colbert (3 messages):

  • ColPali Use Cases
  • ColBert and RAG
  • Qdrant Support for ColBert

Link mentioned: Hybrid Queries - Qdrant: Qdrant is an Open-Source Vector Database and Vector Search Engine written in Rust. It provides fast and scalable vector similarity search service with convenient API.


LlamaIndex ▷ #announcements (1 messages):

  • LlamaIndex Webinar
  • ColPali Document Retrieval
  • Vision Language Models
  • ViDoRe Benchmark

Link mentioned: LlamaIndex Webinar: ColPali - Efficient Document Retrieval with Vision Language Models · Zoom · Luma: Enterprise RAG systems face a significant challenge when processing PDFs with complex layouts, tables, and figures. Conventional RAG pipelines typically…


LlamaIndex ▷ #blog (8 messages🔥):

  • TiDB Future App Hackathon 2024
  • Mixture-of-Agents with LlamaIndex
  • Llama 3.1 Performance
  • LlamaParse Features
  • MongoDB AI Applications Program

Links mentioned:


LlamaIndex ▷ #general (61 messages🔥🔥):

  • context_window parameter
  • chunk_size and chunk_overlap
  • model availability and context size
  • ValueError in LlamaIndex
  • using models with larger context windows

Links mentioned:


Latent Space ▷ #ai-general-chat (53 messages🔥):

  • Llama 3.1 Release
  • IOL Linguistics Olympiad
  • Llama Pricing
  • Llama Performance Evaluations
  • GPT-4o Mini Fine-Tuning

Links mentioned:


Latent Space ▷ #ai-announcements (3 messages):

  • Llama 3 Podcast
  • Synthetic Data
  • RLHF
  • Galactica Instruct
  • Llama 4 Agents

Link mentioned: Tweet from Latent.Space (@latentspacepod): 🆕 pod with @ThomasScialom of @AIatMeta! Llama 2, 3 & 4: Synthetic Data, RLHF, Agents on the path to Open Source AGI https://latent.space/p/llama-3 shoutouts: - Why @ylecun's Galactica Instruct...


LangChain AI ▷ #general (23 messages🔥):

  • AgentState vs InnerAgentState
  • Using Chroma Vector Database
  • Multi-Character Chatbots in LangChain

Links mentioned:


LangChain AI ▷ #share-your-work (3 messages):

  • Scheduler Agent with Composio
  • LangGraph and MapReduce
  • Llama 3.1 Hosting

Links mentioned:


LangChain AI ▷ #tutorials (5 messages):

  • Scheduler Agent
  • YouTube Notes Generator
  • LangGraph and Flow Engineer
  • AI Code Reviewer
  • Fully Local Tool Calling with Ollama

Links mentioned:


Cohere ▷ #general (26 messages🔥):

  • Welcome New Members
  • Model Fine-tuning
  • Cohere's OCR Capabilities
  • RAG Chatbot Discussions
  • Community Feedback Evaluation

Cohere ▷ #announcements (1 messages):

  • Rerank 3 Nimble
  • Cohere and Fujitsu Partnership

Link mentioned: Introducing Rerank 3 Nimble: Faster Reranking for Enterprise Search & Retrieval-Augmented Generation (RAG) Systems: Today, Cohere is introducing Rerank 3 Nimble: the newest foundation model in our Cohere Rerank model series, built to enhance enterprise search and RAG systems, that is ~3x faster than Rerank 3 while ...


Torchtune ▷ #general (22 messages🔥):

  • Llama 3.1 release
  • MPS support and conflicts
  • Issues with LoRA
  • Git workflow challenges

Links mentioned:


Torchtune ▷ #dev (3 messages):

  • MPS support in Torchtune
  • Pad ID bug fix
  • GitHub Pull Request workflow

Links mentioned:


tinygrad (George Hotz) ▷ #general (15 messages🔥):

  • matmul-free-llm with tinygrad
  • M1 performance differences
  • Testing challenges with PYTHON=1
  • cumsum optimization in tinygrad
  • TensorFlow vs PyTorch tensor operations

Links mentioned:


tinygrad (George Hotz) ▷ #learn-tinygrad (6 messages):

  • Incremental Testing in PyTorch
  • Molecular Dynamics Engine in Tinygrad
  • Gradient Calculations
  • Neural Network Potentials

LAION ▷ #general (9 messages🔥):

  • Int8 Usage
  • ComfyUI Flow
  • Llama 3.1 Release
  • Whisper Speech Tool
  • Zuckerberg's Talk on Llama 3.1

Links mentioned:


LAION ▷ #research (5 messages):

  • Meta's commitment to open source AI
  • Llama 3.1 capabilities
  • Context length improvements



OpenInterpreter ▷ #general (1 messages):

  • Llama 3.1 405 B
  • GPT-4o performance

OpenInterpreter ▷ #O1 (3 messages):

  • Voice Input with Coqui Model
  • Expo App for Apple Watch
  • Device Shipping Timeline

Alignment Lab AI ▷ #general-chat (1 messages):

spirit_from_germany: https://youtu.be/Vy3OkbtUa5k?si=mBhzPQqDLgzDEL61


Alignment Lab AI ▷ #open-orca-community-chat (2 messages):

  • OpenOrca dataset licensing
  • Synthetic dataset announcement

LLM Finetuning (Hamel + Dan) ▷ #east-coast-usa (2 messages):

  • Miami meetup
  • NYC interest in August

AI Stack Devs (Yoko Li) ▷ #team-up (1 messages):

ari991963: Hi all, I am Aria a 2D/3D artist, if you are interested to collaborate dm


Mozilla AI ▷ #announcements (1 messages):

  • Mozilla Accelerator Application Deadline
  • Zero Shot Tokenizer Transfer Event
  • AutoFix Project Overview




{% else %}

The full channel by channel breakdowns have been truncated for email.

If you want the full breakdown, please visit the web version of this email: [{{ email.subject }}]({{ email_url }})!

If you enjoyed AInews, please share with a friend! Thanks in advance!

{% endif %}