Frozen AI News archive

SciCode: HumanEval gets a STEM PhD upgrade

**PhD-level benchmarks** highlight the difficulty of coding scientific problems for LLMs, with **GPT-4** and **Claude 3.5 Sonnet** scoring under 5% on the new **SciCode** benchmark. **Anthropic** doubled the max output token limit for Claude 3.5 Sonnet to 8192 tokens. The **Q-GaLore** method enables training **LLaMA-7B** on a single 16GB GPU. The **Mosaic compiler** now generates efficient code for NVIDIA H100 GPUs. The **Dolphin 2.9.3-Yi-1.5-34B-32k-GGUF** model on Hugging Face has over 111k downloads. **Llama 3** shows strong performance, achieving 90% zero-shot accuracy on the MATH dataset. Discussions continue on the limitations and forms of synthetic data for model training.

Canonical issue URL

AI News for 7/15/2024-7/16/2024. We checked 7 subreddits, 384 Twitters, and 29 Discords (466 channels and 2228 messages) for you. Estimated reading time saved (at 200wpm): 248 minutes. You can now tag @smol_ai for AINews discussions!

Lots of small updates here and there - HuggingFace's SmolLM replicated MobileLLM (our coverage just a week ago), Yi Tay wrote up the Death of BERT (our podcast 2 weeks ago), and 1 square block of San Francisco raised/sold for well over $30m in deals across Exa, SFCompute, and Brev (congrats friends!).

However, our technical highlight of the day is SciCode, which challenges LLMs to code solutions for scientific problems drawn from advanced papers. The challenges were crafted by PhDs (~10% are based on Nobel-prize-winning research), and the two leading LLMs, GPT-4 and Sonnet 3.5, score <5% on this new benchmark.


Other than HumanEval and MBPP, the next claimant to a top coding benchmark has been SWE-bench (more info in our coverage), but it is expensive to run and is more an integration test of agentic systems than a test of pure coding ability/world knowledge. SciCode is a nice extension of the very popular HumanEval approach: it is easy/cheap to run, yet remarkably difficult for SOTA LLMs, providing a useful gradient for measuring progress.

Nothing lasts forever (SOTA on SWE-bench went from 2% to 40% in 6 months), but new and immediately applicable benchmark work is very nice when done well.
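For context, the HumanEval recipe that SciCode extends scores a model by generating a completion for each function-level problem and executing it against held-out tests. Below is a minimal sketch of that style of harness; the problem format, the `generate_completion` stub, and the single toy problem are placeholders for illustration, not the actual SciCode data or API.

```python
# Minimal HumanEval-style evaluation loop (illustrative sketch, not the
# actual SciCode harness): generate a completion per problem, execute it
# against the provided test, and report pass@1.

problems = [
    {
        "prompt": (
            "def damped_oscillator(t, a=3.0, k=1.0, w=2.0):\n"
            '    """Return a*exp(-k*t)*sin(w*t)."""\n'
        ),
        "test": "assert abs(damped_oscillator(0.0)) < 1e-9\n",
    },
]

def generate_completion(prompt: str) -> str:
    # Placeholder for an LLM call (e.g. GPT-4 or Claude 3.5 Sonnet); here we
    # hard-code a reference solution so the sketch runs end to end.
    return (
        "    import math\n"
        "    return a * math.exp(-k * t) * math.sin(w * t)\n"
    )

def passes(prompt: str, completion: str, test: str) -> bool:
    # Concatenate signature, model-written body, and test, then execute.
    program = prompt + completion + "\n" + test
    try:
        exec(program, {})  # real harnesses sandbox and time-limit this step
        return True
    except Exception:
        return False

results = [passes(p["prompt"], generate_completion(p["prompt"]), p["test"]) for p in problems]
print(f"pass@1: {sum(results) / len(results):.1%}")
```

The appeal of this format is exactly what the paragraph above notes: evaluation is a cheap generate-and-execute loop, while the difficulty lives entirely in the problems themselves.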


{% if medium == 'web' %}

Table of Contents

[TOC]

{% else %}

The Table of Contents and Channel Summaries have been moved to the web version of this email: [{{ email.subject }}]({{ email_url }})!

{% endif %}


AI Twitter Recap

all recaps done by Claude 3.5 Sonnet, best of 4 runs.

AI Model Developments

AI Model Performance and Benchmarking

AI Safety and Regulation

AI Applications and Demos

Memes and Humor


AI Reddit Recap

Across r/LocalLlama, r/machinelearning, r/openai, r/stablediffusion, r/ArtificialInteligence, r/LLMDevs, r/Singularity. Comment crawling works now but still has lots of room to improve!

Theme 1. New Frontiers

Theme 2. Advanced Stable Diffusion Techniques for Detailed Image Generation

Theme 3. Fine-tuning Llama 3 with Unsloth and Ollama


AI Discord Recap

A summary of Summaries of Summaries

1. Mamba Models Make Waves

2. Efficient LLM Architectures Evolve

3. AI Education and Benchmarking Breakthroughs


PART 1: High level Discord summaries

Unsloth AI (Daniel Han) Discord


Modular (Mojo 🔥) Discord


HuggingFace Discord


Stability.ai (Stable Diffusion) Discord


CUDA MODE Discord


OpenRouter (Alex Atallah) Discord


Perplexity AI Discord


Interconnects (Nathan Lambert) Discord


Eleuther Discord


Latent Space Discord


LM Studio Discord


Nous Research AI Discord


OpenAI Discord


LlamaIndex Discord


Cohere Discord


LangChain AI Discord


OpenAccess AI Collective (axolotl) Discord


Torchtune Discord


LAION Discord


OpenInterpreter Discord


LLM Finetuning (Hamel + Dan) Discord


tinygrad (George Hotz) Discord


Mozilla AI Discord


The Alignment Lab AI Discord has no new messages. If this guild has been quiet for too long, let us know and we will remove it.


The LLM Perf Enthusiasts AI Discord has no new messages. If this guild has been quiet for too long, let us know and we will remove it.


The AI Stack Devs (Yoko Li) Discord has no new messages. If this guild has been quiet for too long, let us know and we will remove it.


The MLOps @Chipro Discord has no new messages. If this guild has been quiet for too long, let us know and we will remove it.


The DiscoResearch Discord has no new messages. If this guild has been quiet for too long, let us know and we will remove it.


The AI21 Labs (Jamba) Discord has no new messages. If this guild has been quiet for too long, let us know and we will remove it.


PART 2: Detailed by-Channel summaries and links

{% if medium == 'web' %}

Unsloth AI (Daniel Han) ▷ #general (235 messages🔥🔥):

  • NVIDIA shutdown
  • RAG performance and optimization
  • Costs of fine-tuning large models
  • Codestral Mamba and Mathstral releases
  • Unsloth Pro license concerns

Links mentioned:


Unsloth AI (Daniel Han) ▷ #off-topic (27 messages🔥):

  • DCLM Baseline
  • Model performance
  • RTX 4090 vs 3060
  • Eureka Labs AI
  • New releases from Mistral

Links mentioned:


Unsloth AI (Daniel Han) ▷ #help (132 messages🔥🔥):

  • Mimicking Pretraining
  • Fine-Tuning LLMs on Domain-Specific PDFs
  • RunPod Training Issues
  • Multi-GPU Support
  • Exporting Models and Inference Methods

Links mentioned:


Unsloth AI (Daniel Han) ▷ #research (17 messages🔥):

  • LLaMA-405B
  • Q-Sparse
  • ColPali
  • AgentInstruct
  • Adam-mini

Links mentioned:


Modular (Mojo 🔥) ▷ #general (32 messages🔥):

  • FlatBuffers vs Protobuf
  • AMD logo color discussion
  • Mojo GitHub search issues
  • Open-source status of MAX SDK
  • YouTube links to Mojo talks

Links mentioned:


Modular (Mojo 🔥) ▷ #💬︱twitter (1 messages):

ModularBot: From Modular: https://twitter.com/Modular/status/1812972838707687889


Modular (Mojo 🔥) ▷ #📺︱youtube (1 messages):

  • MAX Graph API
  • AI inference pipeline
  • Mojo

Link mentioned: MAX Graph API Tutorial: The MAX Graph API allows you to build your entire AI inference pipeline in Mojo. In this video Ehsan M. Kermani discusses how you can get started with MAX Gr...


Modular (Mojo 🔥) ▷ #ai (8 messages🔥):

  • Mojo with local Whisper
  • Mistral 7B coding model
  • Mamba models
  • GUI example for Mistral 7B
  • ONNX conversion

Links mentioned:


Modular (Mojo 🔥) ▷ #mojo (154 messages🔥🔥):

  • Error Handling in Mojo
  • Python Compatibility Mode
  • Discussion on Function Coloring
  • Dynamic vs Auto Typed Variables
  • GPU and FPGA Error Handling

Links mentioned:


Modular (Mojo 🔥) ▷ #max (34 messages🔥):

  • Modular Exclusive Partnership
  • NVIDIA MAX Platform Support
  • MAX Graph API Tutorial Issues
  • MAX Tensor Imports
  • Reliability of MAX Installations

Links mentioned:


Modular (Mojo 🔥) ▷ #nightly (53 messages🔥):

  • VSCode nightly extension for LSP
  • Proposal for statuses on PRs
  • ComplexSIMD vector implementation
  • Handling reviews and discussions in PRs

Link mentioned: GitHub - refined-github/refined-github: :octocat: Browser extension that simplifies the GitHub interface and adds useful features: :octocat: Browser extension that simplifies the GitHub interface and adds useful features - refined-github/refined-github


Modular (Mojo 🔥) ▷ #mojo-marathons (1 messages):

ModularBot: Congrats <@585884735134236685>, you just advanced to level 1!


HuggingFace ▷ #announcements (1 messages):

  • AI Math Olympiad Winner Open Source
  • Whisper Timestamped Released
  • Nvidia BigVGAN v2
  • Hugging Face and Keras NLP Integration
  • Hugging Face Tokens UI Overhaul

Links mentioned:


HuggingFace ▷ #general (235 messages🔥🔥):

  • Troubleshooting issues with Hugging Face Spaces
  • GPTs agents' learning capabilities
  • Handling tokenization for unknown words in LLMs
  • Merging techniques for specialized agents
  • Validating models for 3D mesh object similarity

Links mentioned:


HuggingFace ▷ #today-im-learning (2 messages):

  • K-Means Clustering Video
  • UDOP Paper Discussion

Link mentioned: K-Means Clustering ( ML pt 5 ): In this video, I will talk about K - Means Clustering k-MC . It's going to be a friendly, short introduction, just like all the other videos in the playlist,...


HuggingFace ▷ #cool-finds (8 messages🔥):

  • Happy Dog Detection
  • Retrieval Tutorials
  • Online Censorship Impact
  • Mistral AI Models
  • Llama3 405b

Links mentioned:


HuggingFace ▷ #i-made-this (1 messages):

  • NLP Roadmap
  • NLP Projects Repository
  • NLP Historical Overview
  • NLP TOC

Links mentioned:


HuggingFace ▷ #reading-group (1 messages):

  • Best LLM for course-specific AI model
  • Video transcription tools
  • Fine-tuning on low-end hardware

HuggingFace ▷ #computer-vision (1 messages):

  • Skin Cancer Detection
  • 3D Images
  • Kaggle Competitions

HuggingFace ▷ #NLP (9 messages🔥):

  • NLP basic to advance course recommendations
  • Google Colab and GPU issues
  • Image embeddings and potential bias
  • Sentence transformers: multiple negatives vs. single negative
  • Vector distribution in Faiss index

Links mentioned:


HuggingFace ▷ #gradio-announcements (1 messages):

  • ViteJS usage in Gradio
  • ViteConf partnership
  • Gradio's custom component dev mode

Links mentioned:


Stability.ai (Stable Diffusion) ▷ #general-chat (243 messages🔥🔥):

  • AI Morph and NSFW content
  • Utilizing Stable Diffusion for Anime Style
  • YouTube Tutorial Recommendations
  • Local AI Tool Development
  • GPU Comparisons for AI Models

Links mentioned:


CUDA MODE ▷ #general (6 messages):

  • CUDA Kernel Invocation in Python Scripts
  • Performance Comparison in CUDA and PyTorch Mat Mul Implementations
  • Torch Profiler Usage

Link mentioned: Lecture 3: Getting Started With CUDA for Python Programmers: Recording on Jeremy's YouTube https://www.youtube.com/watch?v=nOxKexn3iBoSupplementary Content: https://github.com/cuda-mode/lecture2/tree/main/lecture3Speak...


CUDA MODE ▷ #torch (25 messages🔥):

  • GPU Performance Issues
  • PyTorch Profiler Export Times
  • Custom Kernels and Thunder Compiler

CUDA MODE ▷ #cool-links (1 messages):

  • SCALE GPGPU programming toolkit
  • Compiling CUDA for AMD GPUs
  • SCALE support for more GPU vendors
  • SCALE tutorial and examples

Link mentioned: SCALE documentation: no description found


CUDA MODE ▷ #jobs (9 messages🔥):

  • Suno hiring ML engineers
  • Suno looking for torch.compile and triton experts
  • Cutlass not required but encouraged
  • Suno hiring interns for ML roles

Link mentioned: Machine Learning Infrastructure Engineer: We’re looking for early members of our machine learning team. You’ll work closely with the founding team and have ownership of a wide variety of technical decisions on how we build and deploy our stat...


CUDA MODE ▷ #beginner (3 messages):

  • Lightning AI's Studios
  • Huggingface Spaces Dev Mode
  • CUDA development

Links mentioned:


CUDA MODE ▷ #jax (1 messages):

andreaskoepf: Anyone tried out Mosaic GPU yet? https://x.com/apaszke/status/1812897008031617493


CUDA MODE ▷ #torchao (3 messages):

  • unwrap_tensor_subclass in torch.compile
  • FakeTensors in model compilation

CUDA MODE ▷ #llmdotc (139 messages🔥🔥):

  • CUDA arguments renaming
  • Attention mechanisms
  • StableAdamW
  • AMD GPU support in llm.c
  • AI+Education company by Andrej Karpathy

Links mentioned:


CUDA MODE ▷ #bitnet (2 messages):

  • Sparsity
  • Quantized models

CUDA MODE ▷ #webgpu (1 messages):

iron_bound: Neat demos https://wgpu.rs/


OpenRouter (Alex Atallah) ▷ #announcements (1 messages):

  • Qwen 2 7B Instruct

Links mentioned:


OpenRouter (Alex Atallah) ▷ #general (130 messages🔥🔥):

  • Google Gemini Models
  • GPT-4o Free Tier
  • Gemini 1.5 Pro Performance
  • OpenRouter Issues
  • Llama 3 Extended Context Models

Links mentioned:


Perplexity AI ▷ #general (110 messages🔥🔥):

  • GPT Issues with Pasted Values
  • Model Settings for Different Collections
  • Perplexity Office
  • Gemini AI Details
  • Pro Subscription Support

Links mentioned:

plot this using plotly with additional upper...: Based on the instructions and search results, I'll provide a detailed explanation of how to plot the given function using Plotly, including upper and lower...

f(t) = 3 * e^(-t) * sin(2*2*pi*t) from t = 0 to 5 - Wolfram|Alpha: Wolfram|Alpha brings expert-level knowledge and capabilities to the broadest possible range of people—spanning all professions and education levels.
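The first link describes plotting a damped sinusoid together with its upper and lower exponential envelope in Plotly. A minimal sketch of that idea, assuming the queried function is f(t) = 3·e^(-t)·sin(2·2π·t) with envelope ±3·e^(-t) (an assumption based on the Wolfram|Alpha query above, not the exact answer from the thread), could look like:

```python
# Illustrative Plotly sketch (assumed function and envelope): plot a damped
# sinusoid with its upper/lower exponential bounds over t in [0, 5].
import numpy as np
import plotly.graph_objects as go

t = np.linspace(0, 5, 1000)
f = 3 * np.exp(-t) * np.sin(2 * 2 * np.pi * t)  # the function itself
upper = 3 * np.exp(-t)                          # upper envelope
lower = -3 * np.exp(-t)                         # lower envelope

fig = go.Figure()
fig.add_trace(go.Scatter(x=t, y=f, name="f(t)"))
fig.add_trace(go.Scatter(x=t, y=upper, name="upper bound", line=dict(dash="dash")))
fig.add_trace(go.Scatter(x=t, y=lower, name="lower bound", line=dict(dash="dash")))
fig.update_layout(xaxis_title="t", yaxis_title="f(t)")
fig.show()
```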


Perplexity AI ▷ #sharing (3 messages):

  • Alphabet $23B Deal
  • 7-Eleven's Upgrade
  • New Zealand's Rare Whale Discovery
  • Accessible Lunar Cave
  • Perplexity AI Pro Features

Links mentioned:


Perplexity AI ▷ #pplx-api (5 messages):

  • Removing sources in pplx-api
  • 524 errors with sonar models
  • Stream mode functionality

Interconnects (Nathan Lambert) ▷ #news (17 messages🔥):

  • Codestral Mamba
  • Mathstral
  • SmolLM
  • Eureka Labs
  • Hydra Model Extension

Links mentioned:


Interconnects (Nathan Lambert) ▷ #ml-questions (3 messages):

  • Circumventing Torrent Laws
  • Evaluation Gating in AI
  • Private Test Sets and Benchmark Filtering

Interconnects (Nathan Lambert) ▷ #ml-drama (3 messages):

  • Lobbying for Legislative Bills
  • Conflict of Interest
  • Profit from Compliance Checks
  • Ethics of Political Donations

Link mentioned: Tweet from Matt Popovich (@mpopv): feels like if you are heavily lobbying for, and soliciting donations to lobby for, a certain legislative bill, you should probably disclose that you secretly own a company positioned to profit from th...


Interconnects (Nathan Lambert) ▷ #random (31 messages🔥):

  • State of LLM evaluations
  • Hypothetical uses of open-source GPT4o-class model
  • AI training data sources
  • Cost of model training
  • New SciCode benchmark

Links mentioned:


Interconnects (Nathan Lambert) ▷ #nlp (6 messages):

  • MSFT Papers
  • WizardLM Vibes

Interconnects (Nathan Lambert) ▷ #rlhf (2 messages):

  • Degenerate Case in Policy Reinforcement
  • DPO-like algorithms

Interconnects (Nathan Lambert) ▷ #reads (21 messages🔥):

  • Qwen Team
  • RewardBench Performance
  • Foundational Large Autorater Models
  • Post-training for AI models
  • DeepMind's New Paper on FLAMe

Links mentioned:


Eleuther ▷ #general (24 messages🔥):

  • GPT-4 training on GSM8k
  • Instruction tuning datasets
  • GPU failures during training
  • Object counting use case
  • Pile 2 dataset size

Eleuther ▷ #research (41 messages🔥):

  • In-context learning with SSMs
  • EM-LLM for infinite context handling
  • FLUTE for faster LLM inference
  • Q-Sparse for efficient sparse LLMs
  • Observational studies on transformer layers

Links mentioned:


Eleuther ▷ #scaling-laws (5 messages):

  • Human vs Animal Intelligence
  • Neural Activity and Growth
  • Gender Differences in Neuron Counts

Eleuther ▷ #interpretability-general (1 messages):

  • Mirror Neurons
  • Feature Representation
  • Circuit Reuse
  • Neurological Theories

Eleuther ▷ #lm-thunderdome (7 messages):

  • Tokenization in MLX & HF models
  • Chat template application in tokenization
  • Top-level options in lm-eval

Eleuther ▷ #gpt-neox-dev (1 messages):

  • Dynamic Evaluation on LLMs
  • EleutherAI cool results
  • Continual Learning
  • Meta-Learning

Link mentioned: ‘dynamic evaluation (NN)’ tag · Gwern.net: no description found


Latent Space ▷ #ai-general-chat (77 messages🔥🔥):

  • Anthropic's PR Strategy
  • Token Limits Impact
  • Evaluation Gating
  • Claude Engineer 2.0
  • Qwen2 Technical Report

Links mentioned:


Latent Space ▷ #ai-in-action-club (1 messages):

  • XState
  • LangGraph
  • LLM Agents

Link mentioned: GitHub - statelyai/agent: Create state-machine-powered LLM agents using XState: Create state-machine-powered LLM agents using XState - statelyai/agent


LM Studio ▷ #💬-general (27 messages🔥):

  • LM Studio Android app access
  • Graphical bug in LM Studio
  • Cloud-based LM Studio
  • Error with llama.cpp in LM Studio
  • H2O.ai Danube3 Model Issue

Link mentioned: h2oai/h2o-danube3-4b-chat-GGUF · Hugging Face: no description found


LM Studio ▷ #🤖-models-discussion-chat (28 messages🔥):

  • Hermes 2
  • Mistral issues
  • Model Merging
  • Open Empathic

Links mentioned:


LM Studio ▷ #🧠-feedback (18 messages🔥):

  • LMS Model Loading Speed Issue
  • Gemma 2 Support
  • Phi 3 Small Support
  • Llama.cpp Limitations

LM Studio ▷ #🎛-hardware-discussion (1 messages):

magiikorb: and M3 ultra isn't even there


LM Studio ▷ #model-announcements (1 messages):

  • Mistrals Mathstral Release
  • Community Models Program
  • Mathstral Performance
  • GGUF Quantization
  • LM Studio Discord Engagement

Link mentioned: lmstudio-community/mathstral-7B-v0.1-GGUF · Hugging Face: no description found


Nous Research AI ▷ #research-papers (5 messages):

  • Evol-Instruct V2
  • Auto Evol-Instruct
  • Q-Sparse
  • BitNet b1.58

Links mentioned:


Nous Research AI ▷ #interesting-links (3 messages):

  • How AI Really Works
  • SpreadsheetLLM
  • Synth Data

Links mentioned:


Nous Research AI ▷ #general (58 messages🔥🔥):

  • Heat Wave Discussions
  • White Roof Paint Innovation
  • Deepseek Coder Comparison
  • Hackathon on FP8
  • Urban Heat Island Effect

Links mentioned:


Nous Research AI ▷ #ask-about-llms (6 messages):

  • Tokenization Issues with Arabic Symbols
  • Tools for Generating PPO/DPO Datasets
  • Invertibility of Tokenization

Nous Research AI ▷ #world-sim (1 messages):

wolfybl: Hi


OpenAI ▷ #ai-discussions (37 messages🔥):

  • GPTs Agents
  • Sora release speculation
  • GPT mini in Lymsys
  • OpenAI Platform
  • AI Programming

OpenAI ▷ #gpt-4-discussions (11 messages🔥):

  • Using GPT-4 to code a mobile game
  • Learning coding with GPT-4
  • Challenges using GPT for development
  • Creating a customer support chatbot
  • Adjusting GPT's response tone

OpenAI ▷ #prompt-engineering (5 messages):

  • Different languages affecting model performance
  • Prompting in native language vs English
  • Model's handling of regional slang and idioms

OpenAI ▷ #api-discussions (5 messages):

  • Language model performance across different languages
  • Prompting in different languages
  • Language preferences for model responses
  • Regional slang, idioms, and colloquialisms in GPT models

LlamaIndex ▷ #announcements (1 messages):

  • LlamaIndex Webinar
  • RAG Improvement
  • Deasie Automated Labeling
  • LlamaParse Tool

Links mentioned:


LlamaIndex ▷ #blog (4 messages):

  • Document RAG
  • Graph Query Algorithm
  • LlamaIndex Webinar
  • Sonnet-3.5 Chart Understanding

LlamaIndex ▷ #general (47 messages🔥):

  • LLM Response Sources
  • Service Context and Indexing Models
  • Vector Datasets and Tools
  • Parallel Index Loading
  • PropertyGraphIndex Embeddings

Links mentioned:


LlamaIndex ▷ #ai-discussion (3 messages):

  • llamaindex property graph vs microsoft graphrag
  • Graph rag functionalities
  • Property graph features

Cohere ▷ #general (44 messages🔥):

  • Cohere Python Library
  • Cohere Discord Bot
  • Spam Awareness
  • Fireside Chats with Max Welling
  • Job Postings and Engagement

Links mentioned:


Cohere ▷ #project-sharing (1 messages):

  • Automatic post categorization
  • Channel specific categorization
  • Prompt adjustments

LangChain AI ▷ #general (9 messages🔥):

  • finetuning pipeline
  • Bengali chatbot
  • timestamps in MessageGraph
  • community help

LangChain AI ▷ #share-your-work (3 messages):

  • Automatic 1111 SD with 1.5
  • Browser RAG using LangChain & WebLLM
  • Launch of Verbis
  • Open-Source GenAI Models

Link mentioned: Use 100% Browser Only WebLLM to Answer Questions!: In this video, I use Visual Agents to drop a WebLLM chat model onto my canvas and instantly start asking it questions.


OpenAccess AI Collective (axolotl) ▷ #general (2 messages):

  • PyTorch tunner
  • Training instruction models
  • Context length adjustment
  • Mistral's chat template issues

OpenAccess AI Collective (axolotl) ▷ #axolotl-dev (4 messages):

  • Pull Request Created
  • Discussion on DPO
  • Integrating Work

OpenAccess AI Collective (axolotl) ▷ #axolotl-phorm-bot (5 messages):

  • lora_target_linear
  • LoRA configuration
  • Axolotl fine-tuning

Link mentioned: OpenAccess-AI-Collective/axolotl | Phorm AI Code Search: Understand code, faster.


Torchtune ▷ #announcements (1 messages):

  • Torchtune v0.2.0 release
  • New models and recipes
  • Dataset improvements
  • Community contributions

Link mentioned: Release v0.2.0 · pytorch/torchtune: Overview It’s been awhile since we’ve done a release and we have a ton of cool, new features in the torchtune library including distributed QLoRA support, new models, sample packing, and more! Chec...


Torchtune ▷ #general (5 messages):

  • Eval loss calculation
  • Checkpoint optimization
  • Recipe modification
  • Data split and evaluation

Links mentioned:


Torchtune ▷ #dev (1 messages):

  • Scaling RoPE Embeddings
  • Long Context Modeling

Link mentioned: [RFC] Adding RoPE scaling methods to support long context modeling · Issue #1183 · pytorch/torchtune: Background For large document understanding or tasks like code completion, it's often beneficial to have a large context length e.g. > 8K. In order for this to be enabled by default, a model wo...


LAION ▷ #general (3 messages):

  • ComfyUI malicious node attack
  • Disney attacks
  • FBI involvement in Disney attacks

LAION ▷ #research (1 messages):

nodja: https://mistral.ai/news/codestral-mamba/


LAION ▷ #resources (1 messages):

ctrlaltdel: https://youtu.be/pj8CtzHHq-k


OpenInterpreter ▷ #general (1 messages):

jbexta: I'll try to get a demo/tutorial out this week 👍


OpenInterpreter ▷ #O1 (4 messages):

  • Open Interpreter usage with RayBan Stories
  • Rooting RayBan Stories glasses
  • Opinion on hacking via app
  • Google Glass alternative
  • O1 Light and hardware preorder updates

Links mentioned:


LLM Finetuning (Hamel + Dan) ▷ #general (3 messages):

  • GPT-4o fine-tuning access
  • OpenPipeAI support

Link mentioned: Tweet from Kyle Corbitt (@corbtt): If you ever felt the need for an Extremely Overpowered fine-tuned model... we now support training GPT-4o in @OpenPipeAI. Please use responsibly. 😎


tinygrad (George Hotz) ▷ #general (2 messages):

  • Intermediate language of tinygrad
  • Debugging and visualization in tinygrad

Mozilla AI ▷ #announcements (1 messages):

  • Open Interpreter
  • Mike Bird presentation






{% else %}

The full channel by channel breakdowns have been truncated for email.

If you want the full breakdown, please visit the web version of this email: [{{ email.subject }}]({{ email_url }})!

If you enjoyed AInews, please share with a friend! Thanks in advance!

{% endif %}