Frozen AI News archive

not much happened this weekend

OpenAI's **o3** model gains significant attention, with discussions of its capabilities and implications, including an OpenAI board member referencing "AGI." **LangChain** released their **State of AI 2024** survey. **Hume** announced **OCTAVE**, a **3B parameter** API-only speech-language model with voice cloning. **xAI** secured a **$6B Series C** funding round. Discussions highlight **inference-time scaling**, **model ensembles**, and the surprising generalization ability of **small models**. New tools and datasets include **FineMath**, the best open math dataset on Hugging Face, and frameworks for LLM agents. Industry updates cover a **five-month benchmark** of **AMD MI300X** vs **Nvidia H100 + H200**, insights from a meeting with **Lisa Su** on AMD's software stack, and open AI engineering roles. Research innovations include **Large Concept Models (LCM)** from Meta AI, **Chain of Continuous Thought (Coconut)** for latent space reasoning, and mechanistic interpretability initiatives.

AI News for 12/20/2024-12/23/2024. We checked 7 subreddits, 433 Twitters and 32 Discords (215 channels, and 8402 messages) for you. Estimated reading time saved (at 200wpm): 958 minutes. You can now tag @smol_ai for AINews discussions!

Lots to ponder over. We are recapping 2024 over at Latent.space, so far covering:


{% if medium == 'web' %}

Table of Contents

[TOC]

{% else %}

The Table of Contents and Channel Summaries have been moved to the web version of this email: [{{ email.subject }}]({{ email_url }})!

{% endif %}


AI Twitter Recap

all recaps done by Claude 3.5 Sonnet, best of 4 runs.

AI Model Performance and Scaling

AI Development Tools, Frameworks & Datasets

Industry News & Company Updates

AI Research and Innovation

Policy, Ethics, and Societal Impact

Memes/Humor


AI Reddit Recap

/r/LocalLlama Recap

Theme 1. Gemini 2.0 adds multimodal capabilities in January

Theme 2. Phi-4 release delays and unofficial versions

Theme 3. Advancements in Llama-3_1-Nemotron-51B and GGUF quantization tools

Theme 4. Tokenization challenges in LLMs: Deeper analysis than expected

Theme 5. MI300X vs H100 vs H200 GPU benchmark shows AMD potential

Other AI Subreddit Recap

/r/Singularity, /r/Oobabooga, /r/MachineLearning, /r/OpenAI, /r/ClaudeAI, /r/StableDiffusion, /r/ChatGPT

Theme 1. Veo 2's AI Short Films: A New Cinematic Era

Theme 2. Evaluating O1 Pro: User Perspectives and Competitor Analysis


AI Discord Recap

A summary of Summaries of Summaries by o1-preview-2024-09-12

Theme 1: OpenAI's O3 Model Sparks Heated Debates

Theme 2: AI Coding Assistants Under Fire for Performance Issues

Theme 3: Fine-Tuning and Quantization Techniques Gain Traction

Theme 4: Ethics and Uncensoring in AI Models

Theme 5: Medical AI Models Make Significant Strides

o1-2024-12-17

Theme 1. Major Editor & Tool Upgrades

Theme 2. AI Model Announcements & Performance

Theme 3. Fine-Tuning & LLM Benchmarks

Theme 4. GPU & HPC Showdowns

Theme 5. Innovative Applications & Prompting


PART 1: High level Discord summaries

Codeium (Windsurf) Discord


Cursor IDE Discord


OpenAI Discord


aider (Paul Gauthier) Discord


Nous Research AI Discord


Interconnects (Nathan Lambert) Discord


Stackblitz (Bolt.new) Discord


Unsloth AI (Daniel Han) Discord


Stability.ai (Stable Diffusion) Discord


OpenRouter (Alex Atallah) Discord


LM Studio Discord


Modular (Mojo 🔥) Discord


Notebook LM Discord


Eleuther Discord


Perplexity AI Discord


GPU MODE Discord


Nomic.ai (GPT4All) Discord


Latent Space Discord


Cohere Discord


LlamaIndex Discord


tinygrad (George Hotz) Discord


DSPy Discord


OpenInterpreter Discord


Torchtune Discord


LAION Discord


LLM Agents (Berkeley MOOC) Discord


Axolotl AI Discord


The MLOps @Chipro Discord has no new messages. If this guild has been quiet for too long, let us know and we will remove it.


The Mozilla AI Discord has no new messages. If this guild has been quiet for too long, let us know and we will remove it.


The HuggingFace Discord has no new messages. If this guild has been quiet for too long, let us know and we will remove it.


The Gorilla LLM (Berkeley Function Calling) Discord has no new messages. If this guild has been quiet for too long, let us know and we will remove it.


The AI21 Labs (Jamba) Discord has no new messages. If this guild has been quiet for too long, let us know and we will remove it.


PART 2: Detailed by-Channel summaries and links

{% if medium == 'web' %}

Codeium (Windsurf) ▷ #announcements (1 messages):

Windsurf 1.1.1 release, Usage Transparency and Pricing, Cascade Image Uploads, Improved Python Support

Link mentioned: Windsurf Editor Changelogs | Windsurf Editor and Codeium extensions: Latest updates and changes for the Windsurf Editor.


Codeium (Windsurf) ▷ #content (1 messages):

Send to Cascade feature

Link mentioned: Tweet from Windsurf (@windsurf_ai): Send your problems straight to Cascade!


Codeium (Windsurf) ▷ #discussion (175 messages🔥🔥):

Windsurf Performance Issues, Windsurf Subscription Queries, AI Project Development, User Experience with Codeium, Holiday Promotions and Support

Links mentioned:


Codeium (Windsurf) ▷ #windsurf (504 messages🔥🔥🔥):

Windsurf Performance Issues, Flow Action Limits, User Login Problems, Model Comparisons, Support and Feedback

Links mentioned:


Cursor IDE ▷ #general (903 messages🔥🔥🔥):

Cursor IDE, AI coding assistance, O1 and Sonnet models, React development challenges, Using AI for web development

Links mentioned:


OpenAI ▷ #annnouncements (1 messages):

Sora Bonus for ChatGPT Plus, Sora access for Teams users, Blend feature upgrade, Shared links for Sora creations

Link mentioned: no title found: no description found


OpenAI ▷ #ai-discussions (717 messages🔥🔥🔥):

Gemini 2.0 Performance, AI vs Human Perception, Llama 3.3 vs Gemini Debate, Robotics and AI, Philosophy of AGI

Links mentioned:


OpenAI ▷ #gpt-4-discussions (28 messages🔥):

O3 Release and Pricing, GPT-4o Subscription Limits, Token Limits Explained, Data Extraction Testing, ChatGPT Usage Feedback


OpenAI ▷ #prompt-engineering (24 messages🔥):

Spectrum Theory and Spectrum Prompting, Behavior modeling with Sora, Dietary planning application, Prompt library discussion, Memory personalization in ChatGPT


OpenAI ▷ #api-discussions (24 messages🔥):

Spectrum Prompting Techniques, Sora Input Methods, Dietary Application Iterations, Recipe Creation Complexity, Nutritional Accuracy in GPT Models


aider (Paul Gauthier) ▷ #announcements (1 messages):

Aider's new polyglot benchmark, o1 model performance, Coding exercise challenges

Link mentioned: o1 tops aider’s new polyglot leaderboard: o1 scores the top result on aider’s new multi-language, more challenging coding benchmark.


aider (Paul Gauthier) ▷ #general (812 messages🔥🔥🔥):

Aider usage with O1 Pro, Gemini model performance comparisons, Benchmark results, Code editing with LLMs, API access and rate limits

Links mentioned:


aider (Paul Gauthier) ▷ #questions-and-tips (45 messages🔥):

gemini-exp-1206 configurations, GitHub Copilot integration, repo maps for various languages, using different LLMs in Aider, polyglot benchmark results

Links mentioned:


aider (Paul Gauthier) ▷ #links (11 messages🔥):

Depth AI evaluation, Model Context Protocol, GritQL query engine, Code generation and maintenance challenges

Links mentioned:


Nous Research AI ▷ #general (460 messages🔥🔥🔥):

Phi-4 Model Performance, Quantization Methods, Local vs Cloud Model Running, Reasoning Capabilities of Models, Mean Generation Speeds

Links mentioned:


Nous Research AI ▷ #ask-about-llms (9 messages🔥):

Instruction Tuning on Raw Text, Training BERT for Classification, KV Cache Architectures, Qwen 32 vs Hermes 70B


Nous Research AI ▷ #research-papers (2 messages):

Medical LLMs, Depth Completion with GANs, Clinical Trust in AI, Multimodal Medical Models, Ethics in Medical AI

Link mentioned: Tweet from Open Life Science AI (@OpenlifesciAI): 🌟 Weekly Medical AI Research Roundup 🌟📅 December 15-21, 2024Here's your weekly digest of the most important medical AI papers! 🎉🤖 Medical LLM & Other Models- MedMax: Mixed-Modal Biomedical As...


Nous Research AI ▷ #reasoning-tasks (1 messages):

Reasoning dataset creation, Collaborative dataset project, Use of <think> tag, Model targeting, Research and study


Interconnects (Nathan Lambert) ▷ #news (262 messages🔥🔥):

OpenAI O3, GPT-5 delays, ARC-AGI performance, AI job market, Evaluating reasoning in AI

Links mentioned:


Interconnects (Nathan Lambert) ▷ #ml-questions (8 messages🔥):

Long Context Training, Llama Team Changes, Rohan Anil's Move, New Model Strategies, AGI Goals

Links mentioned:


Interconnects (Nathan Lambert) ▷ #ml-drama (2 messages):

Deleted Content, Legal Issues


Interconnects (Nathan Lambert) ▷ #random (55 messages🔥🔥):

O3 API enhancements, Gemini project updates, Sora access for users, ChatGPT lawyer engagements, Model evaluation discussions

Links mentioned:


Interconnects (Nathan Lambert) ▷ #memes (32 messages🔥):

OpenAI funding strategies, Riemann Question, GPT performance, Discord emojis, Memes collection channel

Links mentioned:


Interconnects (Nathan Lambert) ▷ #rl (16 messages🔥):

Deliberate Alignment Method, Tulu 3 Output Verification, LLM as Judge for Rewards, Reward Model Challenges


Interconnects (Nathan Lambert) ▷ #reads (12 messages🔥):

The Nvidia Way, Bio-maxxing literature, Asimov Press, Mindless listening while working, Reading group discussions

Links mentioned:


Interconnects (Nathan Lambert) ▷ #lectures-and-projects (35 messages🔥):

Training OLMo-2 13B, Model Fine-Tuning vs RAG, Trust in AI Models, Prompting Techniques, Open Models Discussion

Link mentioned: GitHub - axolotl-ai-cloud/axolotl: Go ahead and axolotl questions: Go ahead and axolotl questions. Contribute to axolotl-ai-cloud/axolotl development by creating an account on GitHub.


Interconnects (Nathan Lambert) ▷ #posts (20 messages🔥):

OpenAI's o3 Model, LLAMA 3.3 Launch, Reasoning AI Models, Anthropic Holiday Surprise, Subscription Pricing Update

Links mentioned:


Stackblitz (Bolt.new) ▷ #announcements (1 messages):

Mistletokens, Holiday gifts from Bolt, Free and Pro user benefits

Link mentioned: Tweet from StackBlitz (@stackblitz): Happy Holidays! Yet again our team put together a special gift for y'all:🎄 We call them, Mistletokens! 🎄Till EOY:🔔 All Pro users get 2M free tokens!🔔 All Free users get 200K daily & 2M monthly...


Stackblitz (Bolt.new) ▷ #prompting (15 messages🔥):

Bolt Studio Launch, DataCloneError Issue, Prompting Best Practices, AI Efficiency Studies, Token Usage Concerns


Stackblitz (Bolt.new) ▷ #discussions (402 messages🔥🔥):

Token Usage in Bolt, Integrating APIs with Bolt, CORS Issues, GitHub Integration with Bolt, Deploying Applications

Links mentioned:


Unsloth AI (Daniel Han) ▷ #general (332 messages🔥🔥):

Unsloth Features, Abliteration Techniques, Debugging Issues on Beam Cloud, LLM Training Feedback, Open Source Development Practices

Links mentioned:


Unsloth AI (Daniel Han) ▷ #help (66 messages🔥🔥):

Unsloth vs Ollama, Fine-tuning Llama 3.2, Using Google Colab vs Local, Semantic Search for Images and Text, Dataset Preparation for Training

Links mentioned:


Unsloth AI (Daniel Han) ▷ #research (3 messages):

MI300X vs H100 and H200, AMD's market position

Link mentioned: MI300X vs H100 vs H200 Benchmark Part 1: Training – CUDA Moat Still Alive: Intro SemiAnalysis has been on a five-month long quest to settle the reality of MI300X. In theory, the MI300X should be at a huge advantage over Nvidia’s H100 and H200 in terms of specifications an…


Stability.ai (Stable Diffusion) ▷ #general-chat (336 messages🔥🔥):

Utilizing LoRA and Inpainting in AI Image Generation, Comparison of SD 3.5 and SDXL Models, Discussion on AI and AGI, Experiences with Different AI WebUIs, Concerns over AI Scamming and Spam

Links mentioned:


OpenRouter (Alex Atallah) ▷ #announcements (1 messages):

Crypto Payments API, On-chain payments for LLMs, Funding agent intelligence

Link mentioned: Tweet from OpenRouter (@OpenRouterAI): Introducing the Crypto Payment API: the first way to script on-chain payments for any LLM 💸Want to make one of the first agents that can fund its own intelligence?Works with ETH, @0xPolygon, & @Base,...


OpenRouter (Alex Atallah) ▷ #app-showcase (3 messages):

Tool Calling Capabilities, Structured Outputs Playground, PKCE Authentication Key Storage

Links mentioned:


OpenRouter (Alex Atallah) ▷ #general (241 messages🔥🔥):

OpenRouter features, Model comparisons, API issues, User experiences, Model performance

Links mentioned:


LM Studio ▷ #general (143 messages🔥🔥):

Granite tokenizer issues, Budget GPUs for AI, RAG image processing, AVX2 CPU compatibility, Low-cost AI services

Links mentioned:


LM Studio ▷ #hardware-discussion (85 messages🔥🔥):

GPU Performance Comparisons, Cooling Solutions, Upcoming GPU Releases, Multi-GPU Setups, Inference Speed Observations


Modular (Mojo 🔥) ▷ #general (11 messages🔥):

Machine Setup Queries, Standard Library Bug Fix, Mojo in High-Frequency Trading


Modular (Mojo 🔥) ▷ #announcements (1 messages):

Happy Holidays Message, Modular Shutdown Notice, Feedback for 24.6 Release


Modular (Mojo 🔥) ▷ #mojo (109 messages🔥🔥):

Mojo atof Performance, NuMojo Bug Fix, GPU Support in Mojo, Mojo List and Span Behavior, NuMojo Testing Results

Links mentioned:


Modular (Mojo 🔥) ▷ #max (106 messages🔥🔥):

Mojo performance compared to JAX, Numpy API implementation for Mojo, Benefits of static vs dynamic compilation, Challenges with functional programming in JAX, Dead code elimination and optimization techniques

Links mentioned:


Notebook LM Discord ▷ #use-cases (48 messages🔥):

AI-generated videos, Customization of AI voices, NotebookLM podcast features, AI podcast app, Research on brain functions

Links mentioned:


Notebook LM Discord ▷ #general (179 messages🔥🔥):

Interactive Mode Issues, Podcast Features and Enhancements, User Experience Feedback, Content Sharing Solutions, Customization Options

Links mentioned:


Eleuther ▷ #general (23 messages🔥):

Attention Mechanism Patterns, Collaboration in Computer Vision, Natural Attention and Diffusion Models, 4-bit Quantization Technology

Links mentioned:


Eleuther ▷ #research (130 messages🔥🔥):

Optimizer Research Challenges, In-Context Learning in LLMs, Alignment Faking in LLMs, Training Dynamics and Generalization, Diffusion Models and Representation Learning

Links mentioned:


Eleuther ▷ #lm-thunderdome (69 messages🔥🔥):

ANTLR4 installation issues, Transformer library dependencies, Chat template configuration, Sympy version requirements

Links mentioned:


Perplexity AI ▷ #announcements (1 messages):

Perplexity 2024 recap, Trending searches, Regional question variations


Perplexity AI ▷ #general (203 messages🔥🔥):

Perplexity Pro issues, Support for languages, AI model usage, User experiences with AI, Encyclopedia creation

Links mentioned:


Perplexity AI ▷ #sharing (8 messages🔥):

AI directive maintenance, Magic Spell Hypothesis, Masked Singer Winner, Big Data Overview, Samsung's Project Moohan

Link mentioned: YouTube: no description found


Perplexity AI ▷ #pplx-api (4 messages):

Web Search Feature API, Tokenizer Issues with Llama 3.1, Credit Card Management in Account

Links mentioned:


GPU MODE ▷ #general (13 messages🔥):

Zero to ASIC Course, Magic ultra-long context models, thrust::device_vector and shared memory, Symbolic integers and floats in PyTorch, Job application experience at Magic

Links mentioned:


GPU MODE ▷ #triton (12 messages🔥):

FP64 Support in Triton, Testing Script for Triton, Padding Recommendations, Triton Build Process, Type Hints/Stubs in Triton

Link mentioned: triton/test/Analysis/test-allocation.mlir at main · triton-lang/triton: Development repository for the Triton language and compiler - triton-lang/triton


GPU MODE ▷ #cuda (13 messages🔥):

NVIDIA CUDA Documentation Issues, CUTLASS Producer-Consumer Structure, ArrayFire Community Adoption, Pricing on Lightning.ai vs. Bare Metal


GPU MODE ▷ #torch (8 messages🔥):

Attention Kernel Fusing, Profiling PyTorch Models, CUDA Memory Debugging

Link mentioned: Understanding CUDA Memory Usage — PyTorch 2.5 documentation: no description found


GPU MODE ▷ #announcements (1 messages):

CUDA Docs for Humans, GPU Glossary, Livestreaming Talks, Video Editing Lag, Community Engagement

Links mentioned:


GPU MODE ▷ #algorithms (1 messages):

Diffusion Models, NeurIPS 2024 Paper, Autoguidance

Link mentioned: Tweet from The Variational Book (@TheVariational): Curious about how diffusion models are influenced? @jaakkolehtinen @unixpickle @prafdhar @TimSalimans @hojonathanho Check out the review of the Autoguidance #NeurIPS2024 runner-up best paper in the ...


GPU MODE ▷ #cool-links (5 messages):

MI300X vs H100 vs H200 Benchmarking, Tensor Parallelism Implementation

Links mentioned:


GPU MODE ▷ #beginner (9 messages🔥):

CUDA Initialization, Learning Resources for GPUs

Link mentioned: GitHub - srush/GPU-Puzzles: Solve puzzles. Learn CUDA.: Solve puzzles. Learn CUDA. Contribute to srush/GPU-Puzzles development by creating an account on GitHub.


GPU MODE ▷ #youtube-recordings (1 messages):

gau.nernst: https://youtu.be/qmpGv72qPCE


GPU MODE ▷ #torchao (1 messages):

torchao optimization, model deployment options, autoquant usage, user-controlled options


GPU MODE ▷ #off-topic (3 messages):

Prompt Compression, Funny System Prompts, Dataset Exploration


GPU MODE ▷ #triton-puzzles (1 messages):

Pycario installation, Python.h error, Shell alternatives


GPU MODE ▷ #sparsity-pruning (1 messages):

PyTorch AO Sparsity, Sparsify API, to_sparse_semi_structured API, Inference Techniques

Link mentioned: ao/torchao/sparsity at main · pytorch/ao: PyTorch native quantization and sparsity for training and inference - pytorch/ao


GPU MODE ▷ #🍿 (2 messages):

Paper download issues, User experience with downloads


GPU MODE ▷ #arc-agi-2 (29 messages🔥):

OpenAI o3 model evaluation, Gemini Flash Thinking performance, RL strategies in model tuning, LLM compute costs, Self-correction in models

Links mentioned:


Nomic.ai (GPT4All) ▷ #general (88 messages🔥🔥):

GPT4All and local models, Mandelbrot fractal implementation, Granite LLM, Using TTS with GPT4All, Multiple user logins on Windows

Links mentioned:


Latent Space ▷ #ai-general-chat (64 messages🔥🔥):

OpenAI o3 model launch, FineMath dataset introduction, Anthropic's market position, OCTAVE speech-language model, Series C funding announcement by xAI

Links mentioned:


Latent Space ▷ #ai-announcements (2 messages):

Vision Papers 2024, Open Models Growth in 2024, DETR Object Detection, Multimodal Model Gaps, Vision Language Models

Links mentioned:


Latent Space ▷ #ai-in-action-club (20 messages🔥):

API keys handling, Character AI audience insights, User experiences with character AI


Cohere ▷ #discussions (12 messages🔥):

CMD-R and Reasoning Skills, Command-R-08 vs GPT-4, AI Red Teaming Tools, Safety Benchmarks for AI, Command R+ Model Performance

Links mentioned:


Cohere ▷ #questions (41 messages🔥):

Cohere request time estimation, Testing tokens, Distribution graph for request time, Sharing results


Cohere ▷ #api-discussions (12 messages🔥):

Cohere Request Timing, TooManyRequestsError Issue, Batch Embed Job Limits


Cohere ▷ #cmd-r-bot (9 messages🔥):

System Message Structure, Markdown H2 Headers, Model Response Optimization


LlamaIndex ▷ #blog (4 messages):

Document processing workflows, Auto-insurance agentic workflow, Dynamic ArXiv research agent, SKU/Product catalog matching


LlamaIndex ▷ #general (29 messages🔥):

Building RAG pipelines, Recruiting for Web3 AI project, LlamaParser issues, LlamaIndex framework feedback, Chat store management


LlamaIndex ▷ #ai-discussion (4 messages):

LLM training with live data, Continuous training of LLMs, Automated training pipeline, Catastrophic forgetting


tinygrad (George Hotz) ▷ #general (13 messages🔥):

PR Guidelines for Readability, ShapeTracker Functionality, Bug Bounty Process, Meeting #50 Agenda

Link mentioned: How ShapeTracker works: Tutorials on tinygrad


tinygrad (George Hotz) ▷ #learn-tinygrad (13 messages🔥):

Tensor Indexing with Boolean Masks, Running Examples in Python, Loading Pretrained CLIP Model, VSCode Project Setup, Discord Rules and Etiquette


DSPy ▷ #general (16 messages🔥):

DSPy and compound AI systems, Optimization task running time, Local model recommendations for tool use

Links mentioned:


DSPy ▷ #colbert (5 messages):

ModernBERT introduction, ModernBERT capabilities, ColBERT integration

Link mentioned: Finally, a Replacement for BERT: Introducing ModernBERT: no description found


OpenInterpreter ▷ #general (16 messages🔥):

Local LLM Integration, LM Studio Tag vs Classic Mode, Access to 1.0 Documentation, Function Calling in 1.0, Proxy Setup with OI


Torchtune ▷ #announcements (2 messages):

Torchtune v0.5.0 Release, Community Hiring Announcement, Kaggle Integration, QAT + LoRA Training Recipe, NPU Support

Links mentioned:


Torchtune ▷ #general (1 messages):

Code State Dict Assumptions, Parameter Wrapping, Persistent Buffers in Models


Torchtune ▷ #dev (8 messages🔥):

NaN issue with KD code, Ray vs torch.distributed, Function-level parallelism with Ray

Link mentioned: NaN running official KD code on different dataset, with packing + compile · Issue #2198 · pytorch/torchtune: Hi, thanks for this great work! With official code, I get NaN, if I change to the different dataset. Could anyone help this? What's happening? I get NaN during the training (about after 3500~3600 ...


LAION ▷ #general (7 messages):

Uncensored GPT, Color spaces and human perception, JPEG/AV1 borrowing techniques, Variational Autoencoders


LAION ▷ #research (4 messages):

Test time cot and knowledge recombination, Impact on text-to-image generation, ZGI with o1 non-preview, Cost concerns


LLM Agents (Berkeley MOOC) ▷ #mooc-questions (9 messages🔥):

LangGraph recommendation, CrewAI community feedback, Berkeley credits for MOOC, YouTube discussion on lab topics, Certificate issuance timeline

Link mentioned: YouTube: no description found


Axolotl AI ▷ #general (3 messages):

Liger DPO, KTO Development, Loss Parity Issues






{% else %}

The full channel by channel breakdowns have been truncated for email.

If you want the full breakdown, please visit the web version of this email: [{{ email.subject }}]({{ email_url }})!

If you enjoyed AINews, please share with a friend! Thanks in advance!

{% endif %}