**Encoder-only models are all you need.**

AI News for 12/18/2024-12/19/2024. We checked 7 subreddits, 433 Twitters and 32 Discords (215 channels, and 4745 messages) for you. Estimated reading time saved (at 200wpm): 440 minutes. You can now tag @smol_ai for AINews discussions!

As Jeremy Howard has been teasing for a few months, he and the Answer.ai/LightOn team released ModernBERT today, updating the classic BERT from 2018.


The HuggingFace blogpost goes into more detail on why this is useful:

  • Context: Old BERTs had a 512-token context; ModernBERT has 8k
  • Data: Old BERTs were trained on older and less data; ModernBERT was trained on 2T tokens, including “a large amount of code”
  • Size: LLMs these days are >70B params, with the requisite cost and latency issues; ModernBERT is 139M (base)/395M (large) params
  • SOTA perf for size: beats perennial Kaggle winners like DeBERTaV3 across the retrieval/NLU/code categories
  • Real-world variable-length long context: input sizes vary in the real world, so that’s the performance the team worked hard to optimise – the “variable” column. For variable-length inputs, ModernBERT is much faster than all other models.
  • Bidirectional: Decoder-only models are specifically constrained against “looking ahead”, whereas BERTs can fill in the blanks:
```python
import torch
from transformers import pipeline
from pprint import pprint

# ModernBERT is a masked LM, so the stock fill-mask pipeline works out of the box.
pipe = pipeline(
    "fill-mask",
    model="answerdotai/ModernBERT-base",
    torch_dtype=torch.bfloat16,
)

# The model fills [MASK] using context from both directions of the sentence.
input_text = "One thing I really like about the [MASK] newsletter is its ability to summarize the entire AI universe in one email, consistently, over time. Don't love the occasional multiple sends tho but I hear they are fixing it."
results = pipe(input_text)
pprint(results)  # top candidate tokens with their scores
```

One of the MANY interesting details disclosed in the paper is the alternating attention scheme - mixing global and local attention in the same way Noam Shazeer did at Character (our coverage here):

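The blogpost describes full global attention only every third layer, with a cheap 128-token sliding window everywhere else. A minimal sketch of the per-layer mask logic (illustrative only, not the actual implementation):

```python
import torch

def layer_attention_mask(seq_len: int, layer_idx: int,
                         global_every: int = 3, window: int = 128) -> torch.Tensor:
    """Boolean mask (True = may attend) for one alternating-attention layer."""
    if layer_idx % global_every == 0:
        # Every `global_every`-th layer: full global attention.
        return torch.ones(seq_len, seq_len, dtype=torch.bool)
    # Remaining layers: local sliding window, O(seq_len * window) not O(seq_len^2).
    pos = torch.arange(seq_len)
    return (pos[:, None] - pos[None, :]).abs() <= window // 2
```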


{% if medium == 'web' %}

Table of Contents

[TOC]

{% else %}

The Table of Contents and Channel Summaries have been moved to the web version of this email: [{{ email.subject }}]({{ email_url }})!

{% endif %}


AI Twitter Recap

all recaps done by Claude 3.5 Sonnet, best of 4 runs.

AI Model Releases and Performance

Major Company News

Technical Developments

  • @DrJimFan discussed advances in robotics simulation, highlighting trends in massive parallelization and generative graphics
  • @_philschmid shared details about Genesis, a new physics engine claiming simulation speeds 430,000x faster than real time
  • @krandiash outlined challenges in extending context windows and memory in AI models

Memes and Humor


AI Reddit Recap

/r/LocalLlama Recap

Theme 1. Bamba: Inference Efficient Hybrid Mamba2 Model

  • Bamba: Inference-Efficient Hybrid Mamba2 Model 🐍 (Score: 60, Comments: 14): Bamba is an inference-efficient hybrid model based on Mamba2. The post title suggests a focus on performance gaps and new benchmarks related to this model, though no further details are provided in the body.
    • Benchmark Gaps: Discussions highlight that the Bamba model shows gaps in math benchmarks, similar to other linear models, due to the training data and the inclusion of benchmark-aligned instruction datasets during training phases. A specific example mentioned is the improvement in the GSM8k score from 36.77 to 60.0 by adding metamath data.
    • Openness in Methodology: Commenters appreciate the transparency in the training and quantization processes of the Bamba model, expressing enthusiasm for the forthcoming paper that promises detailed insights into data sources, ratios, and ablation techniques.
    • Model Naming Humor: There is a lighthearted exchange about the naming convention of models like Bamba, Zamba, and others, with links provided to related papers and models on Hugging Face (Zamba-7B-v1, Jamba, Samba).

Theme 2. Genesis: Generative Physics Engine Breakthrough

  • New physics AI is absolutely insane (opensource) (Score: 1350, Comments: 147): The post discusses an open-source physics AI called Genesis, highlighting its impressive generative and physics engine capabilities. The lack of a detailed text description suggests that the video linked may provide further insights into its functionalities and applications.

    • Skepticism and Concerns: Many commenters express skepticism about the project, comparing it to other hyped technologies like Theranos and Juicero, and suggesting that the affiliations and “open-source” claims may be overstated. MayorWolf and others doubt the authenticity of the video, suggesting it involves creative editing and that the open-source aspect may be limited to what’s already available in tools like Blender.
    • Technical Discussion: Some users discuss the technical aspects, such as the use of Taichi for efficient GPU simulations, and the potential similarities to Nvidia’s Omniverse. AwesomeDragon97 notes a flaw in the simulation regarding water droplet adhesion, indicating the need for further refinement in the physics engine.
    • Project Legitimacy: Links to the project’s website and GitHub repository are shared, with some users noting the involvement of top universities and suggesting it could be legitimate. Others, like Same_Leadership_6238, highlight that while it may seem too good to be true, it is open source and warrants further investigation.
  • Genesis project: a generative physics engine able to generate 4D dynamical worlds powered by a physics simulation platform (Score: 103, Comments: 13): The Genesis project introduces a generative physics engine capable of creating 4D dynamical worlds using a physics simulation platform, developed over 24 months with contributions from over 20 research labs. This engine, written in pure Python, operates 10-80x faster than existing GPU-accelerated stacks and offers significant advancements in simulation speed, being ~430,000 times faster than real-time. It is open-source and aims to autonomously generate complex physical worlds for robotics and physical AI applications (a quick-start sketch follows the comment summaries below).

    • Generative physics engine allows for simulations where robots, including soft robots, can experiment and refine their actions far faster than real-world trials, potentially revolutionizing robotics and physical AI applications.
    • The impact on simulations and animations is substantial, enabling individuals with access to consumer-grade hardware like an NVIDIA 4090 to train robots for real-world applications, which was previously limited to entities with significant resources.
    • Skepticism exists about the technology’s capabilities due to its impressive claims, with users expressing a desire to personally test the engine to validate its performance.
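
For the curious, the repo's hello-world is only a few lines. A hedged sketch adapted from the Genesis README (the API and asset path may differ across versions):

```python
import genesis as gs

gs.init(backend=gs.gpu)  # use gs.cpu if no GPU is available

scene = gs.Scene(show_viewer=False)  # headless; set True to visualize
scene.add_entity(gs.morphs.Plane())  # ground plane
scene.add_entity(gs.morphs.MJCF(file="xml/franka_emika_panda/panda.xml"))  # bundled robot asset
scene.build()

for _ in range(1000):  # step the physics simulation
    scene.step()
```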

Theme 3. Slim-Llama ASIC Processor’s Efficiency Leap

  • Slim-Llama is an LLM ASIC processor that can tackle 3-billion parameters while sipping only 4.69mW - and we’ll find out more on this potential AI game changer very soon (Score: 240, Comments: 25): Slim-Llama is an LLM ASIC processor capable of handling 3 billion parameters while consuming only 4.69mW of power. More details about this potentially significant advancement in AI hardware are expected to be revealed soon.
    • There is skepticism about the Slim-Llama’s performance, with concerns over its 3000ms latency and the practicality of its 5 TOPS at 1.3 TOPS/W power efficiency. Critics argue that the 500KB memory is insufficient for running a 1B model without external memory, which would increase energy consumption (source).
    • The Slim-Llama supports only 1 and 1.5-bit models and is seen as an academic curiosity rather than a practical solution, with potential applications in wearables, IoT sensor nodes, and energy-efficient industrial applications due to its low power consumption of 4.69mW. Some commenters express hope for future use cases with improved 4-bit quantization and better software support.
    • Discussion includes the chip’s 20.25mm² die area using Samsung’s 28nm CMOS technology, with curiosity about its potential performance on more advanced processes like 5nm or 3nm. There is also playful banter about running Enterprise Resource Planning simulations on the “SLUT-based BMM core,” highlighting the chip’s novelty and niche appeal.

Theme 4. Gemini 2.0 Flash Thinking Experimental Release

  • Gemini 2.0 Flash Thinking Experimental now available free (10 RPM 1500 req/day) in Google AI Studio (Score: 73, Comments: 10): Gemini 2.0 Flash Thinking Experimental is now available for free in Google AI Studio, allowing users 10 requests per minute and 1500 requests per day. The interface includes system instructions for answering queries like “who are you now?” and allows adjustments in model selection, token count, and temperature settings.
    • A user humorously described a thinking process example where the model counted the occurrences of “r” in “strawberry” but noted a misspelling, highlighting the model’s step-by-step reasoning.
    • There is curiosity about the potential to utilize the output from Gemini 2.0 Flash Thinking for training additional thinking models, suggesting interest in model improvement and development.

Other AI Subreddit Recap

/r/Singularity, /r/Oobabooga, /r/MachineLearning, /r/OpenAI, /r/ClaudeAI, /r/StableDiffusion, /r/ChatGPT

Theme 1. Gemini 2.0 Flash Thinking released, outperforming older models

  • Gemini 2.0 Flash Thinking (reasoning, FREE) (Score: 268, Comments: 95): Gemini 2.0 Flash, a reasoning model by Google, is now available for free at aistudio.google.com, offering up to 1500 free requests per day with a 2024 knowledge cutoff. The author finds it impressive, particularly for its ability to be steered via system prompts, and notes that it performs on par or better than OpenAI’s GPT-3.5 for tasks like image processing, general questions, and math, criticizing the cost and limitations of OpenAI’s offering.

    • Users are impressed with Gemini 2.0 Flash’s performance, noting its superiority in math compared to other models and its ability to display its reasoning process, which some find remarkable. There is a general sentiment that it outperforms OpenAI’s offerings, with users questioning the value of paying for ChatGPT Plus.
    • There is a discussion on Google’s strategic advantage due to their cost-effective infrastructure, specifically their TPUs, which allows them to offer the model for free, in contrast to OpenAI’s expensive and closed models. This cost advantage is seen as a potential long-term win for Google in the AI space.
    • Some users express a desire for improved UI/UX in Google’s AI products, suggesting that a more user-friendly interface could enhance their appeal. The conversation also touches on the absence of web search capabilities in Gemini, and the potential for custom instructions in AI Studio, which enhances user control over the model’s responses.
  • O1’s full LiveBench results are now up, and they’re pretty impressive. (Score: 267, Comments: 85): OpenAI’s “o1-2024-12-17” model leads in the LiveBench results, showing superior performance particularly in Reasoning and Global Average scores. The table compares several models across metrics like Coding, Mathematics, and Language, with competitors from Google, Alibaba, and Anthropic.

    • There is significant discussion about the O1 model’s pricing and performance. Some users argue that O1 is more expensive than Opus due to “invisible thought tokens”, leading to a cost of over $200 per mTok output, while others claim the price is the same but costs accumulate due to reasoning tokens (source).
    • O1’s capabilities and access are debated, with some noting that the O1 Pro API isn’t available yet and that the current O1 model uses a “reasoning_effort” parameter, which affects its performance and pricing. This parameter indicates that O1 Pro might be a more advanced version with higher reasoning effort.
    • Comparisons with other models like Gemini 2.0 Flash are prevalent, with Gemini noted for its cost-effectiveness and potential for scaling up. Some speculate that Gemini’s efficiency is due to Google’s TPU resources, and there’s optimism about future advancements leading to “in-the-box-AGI” within 1-2 years.
  • The AI race over time by Artificial Analysis (Score: 157, Comments: 12): The report from Artificial Analysis provides a comprehensive overview of the AI race, focusing on the evolution of AI language models from OpenAI, Anthropic, Google, Mistral, and Meta. A line graph illustrates the “Frontier Language Model Intelligence” over time, using the “Artificial Analysis Quality Index” to compare model quality from Q4 2022 to Q2 2025, highlighting trends and advancements in AI development. Full report here.

    • Gemini 2.0 is considered superior to the current GPT-4o model in all aspects, and it is available for free on Google AI Studio.
    • There is a correction regarding the timeline: GPT-3.5 Turbo was not available in 2022; instead, GPT-3.5 Legacy was available during that period.

Theme 2. NotebookLM incorporates interactive podcast feature

  • Notebook LM interaction BETA. MindBlown. (Score: 272, Comments: 69): Google has quietly activated an interaction feature in NotebookLM, allowing users to interact with generated podcasts. The post expresses excitement over this new capability, describing it as “mindblowing.”
    • Users discussed the interaction feature in NotebookLM, noting that it allows real-time conversation with AI about uploaded source material. However, the interaction remains surface-level, and users expressed a desire for deeper conversational capabilities and better prompt responses compared to ChatGPT.
    • The feature requires creating a new notebook and adding sources to generate an audio overview. Interaction begins after the audio is ready, but some users noted it lacks the ability to save or download the interacted podcast, and availability may vary by region.
    • There is a mixed reaction to Google’s advancements in AI, with some users expressing skepticism about Google’s position in the AI race and others noting the feature’s utility for studying, while comparisons were made to OpenAI’s recent updates, which some found underwhelming.

AI Discord Recap

A summary of Summaries of Summaries by o1-2024-12-17

Theme 1. Fierce Model Wars and Bold Price Cuts

Theme 2. Multi-GPU and Fine-Tuning Frenzy

  • Unsloth Preps GPU Magic: Multi-GPU support lands in Q1, with the team testing enterprise pricing and sales revamps. They confirm Llama 3.3 needs around 41GB VRAM to fine-tune properly.
  • SGLang vs. vLLM in a Performance Duel: vLLM wins for raw throughput, while SGLang excels in structured outputs and scheduling. Engineers weigh trade-offs, citing SGLang’s flexible modular approach for certain workflows.
  • Quantization Saves the Day: Threads tout 4-bit or 8-bit quantization to shrink memory footprints. Contributors highlight “RAG plus quantization” as an efficient path for resource-limited tasks.
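
For reference, the 4-bit path in Hugging Face Transformers is a one-flag config via bitsandbytes (a minimal sketch; the model id is a placeholder):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "meta-llama/Llama-3.2-1B"  # placeholder; any causal LM works

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                      # 4-bit weights cut memory ~4x vs fp16
    bnb_4bit_compute_dtype=torch.bfloat16,  # matmuls still run in bf16
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, quantization_config=bnb_config)
```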

Theme 3. Agents, RAG, and RLHF Breakthroughs

  • Agentic Systems Race Ahead: Anthropic’s “year of agentic systems” blueprint outlines composable patterns, fueling speculation of major leaps by 2025. Researchers compare these designs to classical search and note how open thinking patterns can surpass naive retrieval.
  • Asynchronous RLHF Powers Faster Training: A paper proposes off-policy RLHF, decoupling generation and learning to speed up language model refinement. The community debates “how much off-policyness can we tolerate?” in pursuit of efficiency.
  • Multi-Agent LlamaIndex Unleashes RAG: Developers shift from single to multi-agent setups, each focusing on a specialized subtask for robust retrieval-augmented generation. They use agent factories to coordinate tasks and ensure better coverage over large corpora.

Theme 4. AI Tools for Coding Take Center Stage

  • Cursor 0.44.4 Upgrades: The launch introduced “Yolo mode” and improved agent commands, touted in the changelog. Early adopters noticed faster code edits and better task handling in large projects.
  • GitHub Copilot Chat Goes Free: Microsoft announced a no-credit-card-needed tier that even taps “Claude for better capabilities.” Devs cheer cost-free real-time code suggestions, although some still prefer old-school diff editing for version control.
  • Windsurf vs. Cursor Showdown: Users compare collaborative editing, large-file handling, and performance. Many mention Cursor’s consistency for complex refactors, while some appreciate Windsurf’s flexible UI for smaller tasks.

Theme 5. Fresh Libraries and Open-Source Adventures

  • Genesis AI Conjures Physics Realities: A new generative engine simulates 4D worlds 430,000 times faster than real-time. Robotics fans marvel at 26-second training runs on an RTX4090, showcased in the Genesis-Embodied-AI/Genesis repo.
  • ModernBERT Takes a Bow: This “workhorse model” offers extended context and improved classification or retrieval over older BERT. Community testers confirm better performance and simpler optimization for RAG workflows.
  • Nomic Maps Data in the Browser: The final post in their Data Mapping Series shows how scalable embeddings and dimensionality reduction democratize massive dataset visualization. Readers laud it as a game-changer for exploratory analysis.

PART 1: High level Discord summaries

Unsloth AI (Daniel Han) Discord

  • Unsloth Preps Multi-GPU Magic: Multi-GPU support for Unsloth is slated for Q1, and the team is fine-tuning pricing details alongside final tests.
    • They also hinted at revamping sales processes for enterprise interest, though their enterprise beta remains in a testing phase.
  • Llama 3.3 Ramps Up Power: The Llama 3.3 model demands roughly 41GB of VRAM to fine-tune, as noted in Unsloth’s blog.
    • Participants reported higher performance in contrast to earlier versions, pointing to the benefits of careful training cycles on large datasets.
  • SGLang vs. vLLM: The Speed Showdown: Many agreed vLLM outpaces SGLang for hefty production tasks, but SGLang v0.4 looks promising for structured outputs and scheduling tricks.
    • Community members consider vLLM stronger for throughput, while SGLang appeals to those optimizing modular results.
  • RAG Meets Quantization: Retrieval-Augmented Generation (RAG) appeared as a smarter alternative to direct fine-tuning when resources are tight, often employing chunked data and embeddings for context retrieval (a minimal retrieval sketch follows this list).
    • Users praised quantization (see Transformers Docs) to shrink memory footprints without completely sacrificing performance.
  • LoRAs, Merging & Instruction Tuning Warnings: Combining Low Rank Adapters (LoRAs) with base models, possibly saved as GGUF options, requires careful parameter balancing to avoid unwanted distortions.
    • An instruction tuning paper highlighted how partial training can degrade core knowledge, underscoring the hazards of merging multiple techniques without thorough validation.
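
The chunk-embed-retrieve loop behind RAG fits in a few lines. A minimal sketch (not Unsloth-specific; the embedding model and documents are placeholders):

```python
import numpy as np
from sentence_transformers import SentenceTransformer

embedder = SentenceTransformer("all-MiniLM-L6-v2")  # placeholder embedding model
chunks = [
    "Llama 3.3 needs roughly 41GB of VRAM to fine-tune.",
    "vLLM wins on raw throughput; SGLang on structured outputs.",
]
chunk_vecs = embedder.encode(chunks, normalize_embeddings=True)

def retrieve(query: str, k: int = 1) -> list[str]:
    q = embedder.encode([query], normalize_embeddings=True)[0]
    scores = chunk_vecs @ q  # cosine similarity, since vectors are unit-norm
    return [chunks[i] for i in np.argsort(-scores)[:k]]

# Retrieved chunks are prepended to the prompt instead of fine-tuned into weights.
print(retrieve("How much VRAM does Llama 3.3 fine-tuning need?"))
```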

Cursor IDE Discord

  • Cursor 0.44.4 Launches with Agent Boost: Cursor 0.44.4 introduced improved agent features, Yolo mode, and is available here.
    • Engineers applauded its faster command execution and better task handling, citing the changelog for a detailed breakdown.
  • Coin Toss: O1 vs Sonnet 3.5: Users pinned O1 at around 40 cents per request and compared its gains to Sonnet 3.5.
    • Some considered Sonnet 3.5 ‘good enough,’ while others questioned if O1’s extra cost is worth the difference.
  • Build It Up: Framer vs. DIY Code: A lively discussion contrasted Framer for rapid site creation with fully custom code.
    • Some praised the time savings, while others preferred complete control over performance and flexibility.
  • Gemini-1206 Gains Curiosity: Members showed interest in Gemini-1206, but concrete evidence of its abilities remains scarce.
    • Others stayed focused on Sonnet 3.5 for coding, since they lacked extensive data on Gemini-1206.
  • College or Startup: The Great Showdown: Some argued Ivy League credentials offer networking perks, while others favored skipping school to build real-world products.
    • Opinions varied, with personal success stories suggesting any path can yield major breakthroughs.

Codeium (Windsurf) Discord

  • Cline & Gemini Triumph Together: Multiple members praised Cline v3 combined with Gemini 2.0 for smoother coding and large-task handling.
    • They noted that it outperformed other setups, mainly due to faster iterations and more stable refactoring capabilities.
  • Windsurf vs Cursor Showdown: Comparisons referenced this direct breakdown on features like collaborative editing and file handling.
    • Opinions seemed divided, but many cited Cursor’s more consistent performance as a critical advantage in code-heavy workflows.
  • Credit Rollover Reassurance: Users confirmed flex credits roll over in Codeium’s paid plan, ensuring no sudden interruptions.
    • Some participants shared relief about not losing credits after paying, highlighting the importance of stable subscription models.
  • Claude vs Gemini Model Chatter: Community members weighed performance differences between Claude Sonnet, Gemini, and other AI models while referencing Aider LLM Leaderboards.
    • They stressed the need for contextual prompts and thorough documentation to fully leverage each model’s coding potential.

Interconnects (Nathan Lambert) Discord

  • Gemini 2.0 Flashes a ‘Think Out Loud’ Trick: Google introduced Gemini 2.0 Flash Thinking, an experimental model that trains explicit chain-of-thought for enhanced reasoning and speed in chatbot tasks.
  • OpenAI Sings with Voice Mode: OpenAI rolled out Work with Apps in voice mode, enabling integration with apps like Notion and Apple Notes as teased on their 12 Days of ChatGPT site.
    • Members called this a straightforward but major step in bridging ChatGPT with real-world productivity, with some hoping advanced voice features could power daily tasks.
  • Chollet’s ‘o1 Tiff’ Rattles LLM Circles: François Chollet equated labeling o1 as an LLM to calling AlphaGo ‘a convnet’, inciting heated arguments on X.
    • Community members noted this parallels the old Subbarao/Miles Brundage incident, with calls for clarity on o1’s architecture fueling further drama.
  • FineMath: Gigantic Gains for LLM Arithmetic: A link from @anton_lozhkov showcased FineMath, a math-focused dataset with 50B+ tokens, promising boosts over conventional corpora.
    • Participants saw this as a big leap for complex code math tasks, referencing merging FineMath with mainstream pretraining to handle advanced calculations.
  • RLHF Book: Spot a Typo, Score Free Copies: An RLHF resource was mentioned to be on GitHub, where volunteers who catch typos or formatting bugs qualify for free copies of the book.
    • Eager contributors found it less stressful to refine reinforcement learning fundamentals this way, calling the process both fun and beneficial for the community.

OpenAI Discord

  • Day 11 of OpenAI Delivers ChatGPT Boost: Day 11 of the 12 Days of OpenAI introduces a new approach for ChatGPT, featuring a YouTube live session that highlights advanced code collaboration.
    • Engineers can now broaden daily development cycles with AI assistance, although manual copy actions remain necessary.
  • ChatGPT Integrates with Xcode: Participants discussed copying code from ChatGPT straight into Xcode, smoothing iOS dev tasks.
    • This step promises convenience but still depends on user-initiated triggers for actual code insertion.
  • Google’s Gemini 2.0 Hits the Spotlight: Google published the Gemini 2.0 Flash Thinking experimental model, attracting curiosity with bold performance claims.
    • Some participants doubted the model’s reliability after it stumbled on letter-count tasks, fueling skepticism about its real prowess.
  • YouTube Clone Demo with ChatGPT: Members explored using ChatGPT to craft a YouTube-like experience, covering front-end and back-end solutions.
    • Though front-end tasks seemed straightforward, the server-side setup demanded more steps through terminal instructions.
  • AI Automation Heats Up the Engineering Floor: Conversations centered on the prospect of AI fully automating software development, reshaping the demand for human engineers.
    • While many recognized potential time-savings, others wondered if hype was outpacing actual breakthroughs.

Eleuther Discord

  • FSDP vs Tensor Parallel Tangle: At Eleuther, participants compared Fully Sharded Data Parallel (FSDP) to Tensor Parallelism, referencing llama-recipes for real-world implementations.
    • They argued about higher communication overhead in FSDP and weighed that against the direct parallel ops advantage of tensor-based methods, with some voicing concern about multi-node scaling limits.
  • NaturalAttention Nudges Adam: A member highlighted a new Natural Attention Optimizer on GitHub that modifies Adam with attention-based gradient adjustments, backed by proofs in Natural_attention_proofs.pdf.
    • They claimed notable performance gains, though some cited potential bugs in the code at natural_attention.py and suggested caution when replicating results.
  • Diffusion vs Autoregressive Arm-Wrestle: A discussion emerged contrasting diffusion and autoregressive models across image and text domains, highlighting efficiency tradeoffs and discrete data handling.
    • Some posited that diffusion leads in image generation but might be challenged by autoregressive approaches in tasks that require token-level control.
  • Koopman Commotion in NNs: Members debated applying Koopman theory to neural networks, referencing Time-Delay Observables for Koopman: Theory and Applications and Learning Invariant Subspaces of Koopman Operators—Part 1.
    • They questioned the legitimacy of forcing Koopman methods onto standard frameworks, suggesting it might mislead researchers if underlying math doesn’t align with real-world activation behaviors.
  • Steered Sparse AE OOD Queries: In interpretability discussions, enthusiasts explored steered sparse autoencoders (SAE) and whether cosine similarity checks on reconstructed centroids effectively gauge out-of-distribution data.
    • They reported that adjusting one activation often influenced others, indicating strong interdependence and prompting caution in interpreting SAE-based OOD scores.

Perplexity AI Discord

  • Perplexity’s Referral Program Boosts Sign-Ups: Multiple users confirmed that Perplexity offers a referral program granting benefits for those who link up with new sign-ups.
    • Enthusiasts aim to recruit entire fraternities, accelerating platform reach and energizing discussions about user growth.
  • You.com Imitation Raises Accuracy Concerns: Community members discussed You.com replicating responses with search-based system instructions, questioning the quality of its output.
    • They noted that relying on direct model calls often produces more precise logic, revealing potential gaps in search-oriented Q&A solutions.
  • Game Descriptions Overwhelm Translation Limits: A user attempting to convert lengthy lists to French encountered size restrictions, showing Perplexity AI’s text-handling constraints.
    • They sought advice on segmenting content into smaller chunks, hoping to bypass these limitations in complex translation tasks (a simple splitter sketch follows this list).
  • Magic Spell Hypothesis Sparks Curiosity: A posted document described the Magic Spell Hypothesis, linking advanced linguistic patterns to emerging concepts in scientific circles.
    • Researchers and community members weighed its credibility, applauding attempts to test fringe theories in structured experiments.
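
A naive paragraph-boundary splitter is usually enough to stay under per-request limits (an illustrative sketch; the 4,000-character budget is an arbitrary placeholder):

```python
def chunk_text(text: str, max_chars: int = 4000) -> list[str]:
    """Split on paragraph boundaries so each piece fits one request."""
    chunks: list[str] = []
    buf = ""
    for para in text.split("\n\n"):
        if buf and len(buf) + len(para) + 2 > max_chars:
            chunks.append(buf)  # flush the current chunk before it overflows
            buf = ""
        buf = f"{buf}\n\n{para}" if buf else para
    if buf:
        chunks.append(buf)
    return chunks
```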

aider (Paul Gauthier) Discord

  • Gemini Gains Ground: On 12/19, Gemini 2.0 Flash Thinking emerged with the gemini-2.0-flash-thinking-exp-1219 variant, touting better reasoning in agentic workflows as shown in Jeff Dean’s tweet.
    • Initial tests revealed faster performance than O1 and deepseek, and some community members applauded its upgraded output quality.
  • Aider & MCP Get Cozy: Users achieved Aider and MCP integration for streamlined Jira tasks, referencing Sentry Integration Server - MCP Server Integration.
    • They discussed substituting Sonnet with other models in MCP setups, suggesting top-notch flexibility for error tracking and workflow automation.
  • OpenAPI Twinning Madness: Community members explored running QwQ on Hugging Face alongside local Ollama, clarifying that Hugging Face mandates its own API usage for seamless model switching.
    • They discovered the need to indicate the service in the model name, preventing confusion in multi-API setups.
  • Copilot Chat Spices Up: GitHub Copilot Chat introduced a free immersive mode as stated in GitHub’s announcement, offering real-time code interactions and sharper multi-file edits.
    • While users appreciated the enhanced chat interface, some still preferred old-school diff edits to contain costs and maintain predictable workflows.

Stackblitz (Bolt.new) Discord

  • Bolt & Supabase Spark Instant Setup: The Bolt & Supabase integration is officially live, offering simpler one-click connections as shown in this tweet from StackBlitz. It eliminates the manual steps, letting engineers unify services more quickly and reduce overhead.
    • Users praised the easy setup, noting how it shortens ramp-up time for data-driven applications and provides a frictionless developer experience.
  • Figma Frustrations & .env Woes: Users reported .env file resets that disrupt Firebase configurations, with locking attempts failing after refresh and causing ‘This project exceeds the total supported prompt size’ errors.
    • Additionally, Figma direct uploads are off the table, forcing designers to rely on screenshots while requesting more robust design-to-dev integrations.
  • Redundancy Rehab & Public Folder Setup: Community members asked if Bolt could analyze code for redundant blocks, aiming to cut token use in large-scale apps. They also needed clarity on building a public folder to host images, highlighting confusion about project structure.
    • Some suggested straightforward docs to resolve folder-setup uncertainties, indicating a desire for simpler references when working with Bolt.
  • Session Snafus & Token Tangles: Frequent session timeouts and forced page refreshes left many losing chat histories in Bolt, driving up frustration and token costs. The dev team is investigating these authentication issues, but real-time disruptions persist.
    • Users hope for fixes that reduce redundant outputs and control overspending on tokens, seeking stability in their project workflows.
  • Community Convergence for Guides & Integrations: Participants plan a broader guide for Bolt, providing a user dashboard for submitting and approving resources. The conversation touched on Stripe integration, advanced token handling, and synergy with multiple tech stacks.

Notebook LM Discord

  • Interactive Mode Reaches Everyone: The development team confirmed Interactive Mode reached 100% of users with notable improvements for audio overviews.
    • Enthusiasts praised the creative possibilities and shared firsthand experiences of smoother deployment.
  • MySQL Database Hook for Automatic NPCs: A game master asked how to connect a large MySQL database to NotebookLM for automating non-player character responses.
    • They highlighted a decade of stored RPG data and sought ideas for managing dynamic queries.
  • Podcasters Tweak Recording Setup: Members debated how the interactive podcast feature does not store conversations, forcing separate audio capture for external listeners.
    • A concise ‘podcast style prompt’ sparked interest in faster, more candid commentary for a QWQ model review.
  • AI-Generated Space Vlog Shakes Viewers: A user showcased a year-long astronaut isolation vlog rendered by AI, linked at this YouTube link.
    • Others noted daily animated uploads driven by NotebookLM outputs, demonstrating consistent content production.
  • Updated UI Gains Kudos: Users applauded the NotebookLM interface overhaul, describing it as more receptive and convenient for project navigation.
    • They are eager to test its new layouts and praised the overall polished look.

Stability.ai (Stable Diffusion) Discord

  • Ubuntu Steps for SDXL: Some members shared tips for running SDXL on Ubuntu, advising the use of shell scripts from stable-diffusion-webui-forge for streamlined setups.
    • They underlined the importance of system knowledge to avoid performance bottlenecks.
  • ComfyUI Meltdown: Engineers complained about persistent errors and charred output from ComfyUI despite attempts to fix sampling issues.
    • They recommended using Euler sampling with well-tuned denoising levels to reduce flawed results.
  • AI Images Face Rocky Road to Perfection: Some argued AI-generated images and video won’t be flawless by 2030 due to current challenges.
    • Others countered that rapid technological leaps could deliver polished outputs much sooner.
  • Quantum Quarrel Over P=NP: A heated chat focused on quantum computing relevance if P=NP becomes reality.
    • Skeptics pointed to trouble extracting real-world value from quantum states, citing complexity in practical execution.
  • Civitai.com Down Again?: Multiple users noted frequent outages on civitai.com, making model access challenging.
    • They speculated recurring server problems are behind the repeated downtime.

GPU MODE Discord

  • GPU Glitter & Coil Whine: Users complained about absurd coil whine from a returned RX 6750XT, plus VRChat’s memory appetite prompting some to choose 4090s.
    • They also expressed worry about potentially bigger price tags for next-gen RTX 50 cards while comparing the 7900 XTX.
  • Triton Tinkers with AMD: Community members tested Triton kernels on AMD GPUs like the RX 7900, noting performance still lags behind PyTorch/rocBLAS.
    • They also discovered that warp-specialization was removed in Triton 3.x, driving them to explore alternative optimizations.
  • CARLA Zooms into UE 5.5: CARLA version 0.10.0 introduced Unreal Engine 5.5 features like Lumen and Nanite, boosting environment realism.
  • MatX’s HPC Hiring Spree: MatX announced open roles for low-level compute kernel authors and ML performance engineers, aiming to build an LLM accelerator ASIC.
    • The job listing emphasizes a high-trust environment that favors bold design decisions over extended testing.
  • Alma’s 40-Option Benchmark Bash: A duo released alma, a Python package checking the throughput of over 40 PyTorch conversion options in a single function call.
    • According to GitHub, it gracefully handles failures with isolated processes and will expand into JAX and llama.cpp soon.

Latent Space Discord

  • Anthropic Agents Amp Up: Anthropic posted Building effective agents with patterns for AI agentic systems, anticipating a major milestone in 2025.
    • They emphasized composable workflows, referencing a tweet about the ‘year of agentic systems’ for advanced design.
  • Gemini 2.0 Gains Speed: Multiple tweets, including lmarena.ai’s mention and Noam Shazeer’s announcement, praised Gemini 2.0 Flash Thinking for topping all categories.
    • The model trains to ‘think out loud’, enabling stronger reasoning and outdoing earlier Gemini versions.
  • Databricks Hauls $10B: They announced a Series J funding round worth $10B, hitting a $62B valuation with Thrive Capital leading.
    • They anticipate crossing $3B in revenue run rate, reporting 60% growth sparked by AI demand.
  • ModernBERT Steps Onstage: A new model called ModernBERT was introduced as a ‘workhorse’ option with extended context and improved performance.
    • References like Jeremy Howard’s mention show attempts to apply it in retrieval and classification, spurring conversation among practitioners.
  • Radford Says Farewell to OpenAI: Alec Radford, credited for the original GPT paper, left OpenAI to pursue independent research.
    • This shift stirred speculation about OpenAI’s upcoming directions in the industry.

OpenInterpreter Discord

  • OpenInterpreter’s Vision Variation: OpenInterpreter 1.0 now includes vision support, with an installation path via GitHub and `pip install git+https://github.com/OpenInterpreter/open-interpreter.git@development`.
    • Experiments suggest the `--tools gui` command functions properly for bridging different models or APIs, with people noting local or SSH-based usage.
  • Server Mode Sparks Execution Queries: Members questioned how server mode handles command execution, debating whether tasks run locally or on the server.
    • They mentioned using SSH for simpler interaction and proposed a front end for improved workflow.
  • Google Gemini 2.0 Gains Attention: A user showed interest in Google Gemini 2.0 for multimodal tasks within OS mode, hoping for highly capable command execution.
    • They compared it to existing setups and wondered if it competes effectively with other systems.
  • Cleaning Installs & O1 Confusion: Some users faced issues with OpenInterpreter installation after multiple configurations, prompting them to remove flags for a new setup.
    • Meanwhile, an O1 channel user complained about unclear docs, seeking direct guidance even after reading official references.

LM Studio Discord

  • Safetensors Snafu Stumps LM Studio: Users encountered `Safetensors header is unexpectedly large: bytes=2199142139136` errors when loading models, forcing redownloads of the MLX version of Llama 3.3 to fix possible corruption issues.
    • Discussions mentioned conflicts with file compatibility, with some users suggesting a careful file check for future downloads (a header sanity-check sketch follows this list).
  • Mobile Dreams: iOS Gains Chat, Android Waits: An iOS app called 3Sparks Chat (link) connects to LM Studio on Mac or PC, providing a handheld interface for local LLMs.
    • Members expressed disappointment about the lack of an Android client, leaving the community requesting alternative solutions.
  • AMD’s 24.12.1 Distress: The AMD 24.12.1 drivers triggered system stuttering and performance loss when loading models with LM Studio, connecting to llama.cpp rocm libraries.
    • Downgrading drivers resolved problems in some setups, and references to the 7900XTX GPU emerged as a concern for stability.
  • Vision Model Hopes in LM Studio: A query about image input models led to mention of mlx-community/Llama-3.2-11B-Vision-Instruct-4bit, highlighting early attempts at integrating visual features.
    • Users reported loading problems on Windows, fueling debate about model compatibility with local hardware.
  • Apple Silicon vs. 4090 GPU Showdown: Community members questioned if Mac Pro and Ultra chips outperform a 3090 or 4090 card due to memory bandwidth advantages.
    • Benchmark references pointed to the llama.cpp GitHub discussion, where users confirmed the 4090 still holds faster metrics in practical tests.
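
On the corruption point: a .safetensors file begins with an 8-byte little-endian integer giving its JSON header length, so an absurd value can be sanity-checked without loading the model (a minimal sketch):

```python
import struct

def safetensors_header_size(path: str) -> int:
    """Read the u64 header-length prefix of a .safetensors file.

    A value larger than the file itself (like bytes=2199142139136 above)
    usually means a truncated or corrupt download worth re-fetching.
    """
    with open(path, "rb") as f:
        (size,) = struct.unpack("<Q", f.read(8))
    return size
```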

OpenRouter (Alex Atallah) Discord

  • Price Cuts Rattle the LLM Market: This morning saw a 7% cut for gryphe/mythomax-l2-13b, 7.7% for qwen/qwq-32b-preview, and a 12.5% slash on mistralai/mistral-nemo.
    • Community members joked about ‘ongoing price wars’ fueling competition among providers.
  • Crowdsourced AI Stack Gains Spotlight: VC firms have released various ecosystem maps, but there’s a push for a truly crowdsourced and open-source approach showcased in this GitHub project.
    • One user requested feedback on the proposed logic, encouraging the community to ‘contribute to a dynamic developer resource’.
  • DeepSeek Speeds Code Learning: Developers used DeepSeek V2 and DeepSeek V2.5 to parse entire GitHub repositories, reporting significant improvements in project-wide optimization.
    • However, a user cautioned that ‘it may not handle advanced code generation’, and they still praised its annotation abilities.
  • Calls for Programmatic API Keys: A discussion emerged about allowing a provider API key to be sent implicitly with requests, streamlining integration.
    • One user said ‘I’d love to see a programmatic version’ to enhance developer convenience across the board.

Nous Research AI Discord

  • GitHub Copilot Goes Free: Microsoft introduced a new free tier for GitHub Copilot with immediate availability for all users.
    • It surprisingly includes Claude for improved capabilities, and no credit card is required.
  • Granite 3.1-8B-Instruct Gains Fans: Developers praised the Granite 3.1-8B-Instruct model for strong performance on long context tasks.
    • It provides quick results for real-world cases, and IBM offers code resources on GitHub.
  • LM Studio Enables Local LLM Choices: LM Studio simplifies running Llama, Mistral, or Qwen models offline while supporting file downloads from Hugging Face.
    • Users can also chat with documents quickly, appealing to folks wanting an offline approach.
  • Fine-Tuning Uniform Instruction Sparks Debate: Questions arose about using the same instruction for every prompt in a Q&A dataset.
    • A caution was raised that it might cause suboptimal model performance due to repetitive usage.
  • Genesis Project Roars with Generative Physics: The Genesis engine builds 4D dynamical worlds at speeds up to 430,000 times faster than real-time.
    • It’s open source, runs in Python, and slashes robotic training to just 26 seconds on a single RTX4090.

Modular (Mojo đŸ”„) Discord

  • Negative Indexing Showdown in Mojo: A heated discussion emerged about adopting negative indexing in Mojo, with some calling it an error magnet while others see it as standard practice in Python.
    • Opponents favored a .last() approach to dodge overhead, warning of performance issues with negative offsets.
  • SIMD Key Crash Rumbles in Dicts: A serious bug in SIMD-based struct keys triggered segmentation faults in Dict usage, detailed in GitHub Issue #3781.
    • The absence of scaling_cur_freq reportedly triggered these crashes, prompting a fix within a 6-week window.
  • Mojo Goes Rogue on Android: Enthusiasts tinkered with running Mojo on native Android via Docker-based hacks, though it’s labeled ‘wildly unsupported’.
    • Licensing rules prevent publishing a Docker image, but local custom builds remain possible.
  • Python Integration Teases SIMD Support: Participants discussed merging SIMD and conditional conformance with Python types, balancing separate handling for integral and floating-point data.
    • They highlighted ABI constraints and future bit-width expansions, stirring interest in cross-language interactions.

DSPy Discord

  • Synthetic Data Explainer Gains Steam: One contributor is building an explainer on how synthetic data is generated, requesting community input on tricky areas.
    • They plan to highlight creation approaches and performance implications for advanced models.
  • Databricks Rate-Limiting Debate: Participants flagged big throughput charges, calling for a rate limiter to prevent overuse in Databricks.
  • dspy.Signature as a Class: A user asked about returning a dspy.Signature in class form, aiming for structured outputs over raw strings.
    • They hope to define explicit fields for clarity and potential type-checking (a class-based sketch follows this list).
  • Provisioned Throughput Shocks Wallet: A conversation exposed high expense from provisioned throughput in Databricks when it remains active.
    • Members advised the scale-to-zero feature to curb costs during idle periods.
  • LiteLLM Reaches Databricks: Attendees debated whether to embed the LiteLLM proxy within a Databricks notebook or run it separately.
    • They agreed it’s feasible to integrate both approaches, given environment controls and resource needs.
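
DSPy does support this class-based style: subclass dspy.Signature and declare typed fields (a minimal sketch; field names are illustrative):

```python
import dspy

class AnswerQuestion(dspy.Signature):
    """Answer the question with a short, factual response."""
    question: str = dspy.InputField()
    answer: str = dspy.OutputField(desc="one or two sentences")

# A module built from the class returns structured fields, not raw strings:
qa = dspy.Predict(AnswerQuestion)
# result = qa(question="What is a dspy.Signature?")  # requires a configured LM
# print(result.answer)
```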

LlamaIndex Discord

  • LlamaIndex’s Multi-Agent Makeover: A post described the jump from a single agent to a multi-agent system with practical code examples in LlamaIndex, referencing this link.
    • It also clarifies how agent factories manage multiple tasks working in tandem (an illustrative factory sketch follows this list).
  • Vectara’s RAG Rally: An update showcased Vectara’s RAG strengths, including data loading and streaming-based queries, referencing this link.
    • It underscored agentic usage of RAG methods, with insights on reranking in a managed environment.
  • Vercel’s AI Survey Shout-Out: Community members were urged to fill out Vercel’s State of AI Survey, found here.
    • They plan to gather data on developer experiences, challenges, and target areas for future AI improvements.
  • Vision Parse for PDF Power: A new open-source Python library, Vision Parse, was introduced for converting PDF to well-structured markdown using advanced Vision Language Models.
    • Participants praised its potential to simplify document handling and welcomed open-source efforts for collective growth.
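
To make the factory idea concrete without pinning down LlamaIndex's exact API (the linked post covers that), here is a generic sketch: one factory stamps out specialist agents, each bound to its own retriever.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Agent:
    name: str
    run: Callable[[str], str]

def make_agent(name: str, retrieve: Callable[[str], str]) -> Agent:
    """Factory: bind a subtask-specific retriever to a uniform agent interface."""
    def run(query: str) -> str:
        context = retrieve(query)
        return f"[{name}] answer grounded in: {context}"
    return Agent(name, run)

# Each agent covers one slice of a large corpus (retrievers are placeholders).
agents = [make_agent(n, lambda q, n=n: f"top {n} chunks for {q!r}")
          for n in ("code", "docs", "issues")]
for agent in agents:
    print(agent.run("how do I stream query results?"))
```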

Nomic.ai (GPT4All) Discord

  • Nomic’s Data Mapping Marathon Ends: The final installment of the Data Mapping Series spotlights scalable graphics for embeddings and unstructured data in Nomic’s blog post.
    • This six-part saga demonstrates how techniques like dimensionality reduction empower users to visualize massive datasets within their web browsers.
  • BERT & GGUF Glitches Get Patched: Users faced issues loading Nomic’s BERT embedding model from Huggingface after a commit broke functionality, but the fix is now live.
    • Community members also flagged chat template problems in .GGUF files, with updated versions promised in the upcoming release.
  • Code Interpreter & System Loader Shine: A pull request proposes a code interpreter tool built on the jinja template for running advanced code tasks.
    • Simultaneously, users requested a more convenient system message loader to bypass manual copy-pasting of extensive context files.
  • GPT4All Device Specs Confirmed: A query about GPT4All system requirements led to a link detailing hardware support.
    • Important CPU, GPU, and memory details were highlighted to ensure a stable local LLM experience.

tinygrad (George Hotz) Discord

  • TinyChat Installation Tussle: One user ran into problems setting up TinyChat, reporting missing pieces like tiktoken and a 30-second system freeze, plus a puzzling prompt about local network devices.
    • George Hotz spoke about writing a tiktoken alternative within TinyGrad and flagged 8GB RAM as a constraint.
  • Mac Scroll Direction Goes Rogue: A user complained that running TinyChat flipped the scroll direction on their Mac, then reverted once the app closed.
    • George Hotz called this behavior baffling, acknowledging it as a strange glitch.
  • Bounty Push and Layout Talk: Contributors discussed bounty rewards to push tinygrad forward, stressing tests and improvements as key drivers.
  • Scheduler Query in #learn-tinygrad: A participant asked why the scheduler uses realize before expand or unsafe pad ops, with no clear explanation offered.
    • The group didn’t fully unpack the reasoning, leaving the topic open for further exploration.

Cohere Discord

  • Ikuo Impresses & Etiquette Ensues: Ikuo618 introduced himself with six years of experience in DP, NLP, and CV, spotlighting his Python, TensorFlow, and PyTorch skills.
    • A gentle reminder followed, advising members not to repost messages across channels for a cleaner conversation flow.
  • Platform Feature Question Marks: A user asked about a feature’s availability on the platform, and a member confirmed it’s still not live.
    • The inquirer expressed thanks, ending on a positive note with a smiley face.
  • Cohere Keys & Rate Limits Exposed: Cohere provides evaluation and production API keys, detailed on the API keys page and in the pricing docs.
    • Rate limits include 20 calls per minute for trial and 500 per minute for production on the Chat endpoint, with Embed and Classify sharing distinct quotas.
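
A client-side throttle is the simplest way to stay under those quotas (a minimal sketch; 20 calls/minute matches the trial limit above):

```python
import time

def make_throttle(calls_per_minute: int = 20):
    """Return a wait() that spaces calls evenly under the quota."""
    interval = 60.0 / calls_per_minute
    last_call = [0.0]

    def wait() -> None:
        sleep_for = last_call[0] + interval - time.monotonic()
        if sleep_for > 0:
            time.sleep(sleep_for)
        last_call[0] = time.monotonic()

    return wait

wait = make_throttle(20)
# call wait() before each Chat endpoint request
```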

Torchtune Discord

  • Torchtune Teases Phi 4 & Roles: In the official Torchtune docs page, members confirmed that Torchtune only supports Phi 3 but welcomes contributions for Phi 4.
    • They introduced a Contributor role on Discord and noted minimal differences between Phi 3 and Phi 4 to simplify new pull requests.
  • Asynchronous RLHF Races Ahead: Asynchronous RLHF separates generation and learning for faster model training, detailed in “Asynchronous RLHF: Faster and More Efficient Off-Policy RL for Language Models”.
    • The paper asks “how much off-policyness can we tolerate?”, pushing for speed without sacrificing performance (a toy decoupling sketch follows this list).
  • Post-Training Gains Momentum: The Allen AI blog highlights that post-training is crucial after pre-training to ensure models follow human instructions safely.
    • They outline instruction fine-tuning steps and focus on preserving capabilities such as intermediate reasoning while specializing.
  • Instruction Tuning Tightrope: InstructGPT-style strategies can unwittingly diminish certain model abilities, especially if specialized tasks overshadow broader usage.
    • Retaining coding proficiency while handling poetic or general instructions emerged as the delicate balance to maintain.
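
To make the decoupling concrete, a toy sketch (not the paper's code): a generator thread keeps producing rollouts from a possibly stale policy while the learner consumes them, tracking staleness as the off-policy gap.

```python
import queue
import threading

rollouts: queue.Queue = queue.Queue(maxsize=64)
policy_version = [0]  # bumped by the learner after each update

def generator() -> None:
    """Produce rollouts continuously; they may be stale when consumed."""
    while True:
        rollouts.put({"text": "sampled response", "version": policy_version[0]})

def learner(steps: int = 100, batch_size: int = 8) -> None:
    for _ in range(steps):
        batch = [rollouts.get() for _ in range(batch_size)]
        staleness = policy_version[0] - min(r["version"] for r in batch)
        # "How much off-policyness can we tolerate?" -> monitor/cap `staleness`
        policy_version[0] += 1  # stand-in for a gradient update

threading.Thread(target=generator, daemon=True).start()
learner()
```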

LLM Agents (Berkeley MOOC) Discord

  • LLM Agents Hackathon Countdown: The submission deadline for the hackathon is 12/19 at 11:59 PM PST, with entries filed via the official Submission Form.
    • The community is on standby for last-minute fixes, making sure everyone has a fair shot before the clock hits zero.
  • Support for Eleventh-Hour LLM Queries: Participants can drop last-minute questions in the chat for quick feedback from peers.
    • Organizers urge coders to finalize checks promptly, avoiding frantic merges at the buzzer.

The MLOps @Chipro Discord has no new messages. If this guild has been quiet for too long, let us know and we will remove it.


The Axolotl AI Discord has no new messages. If this guild has been quiet for too long, let us know and we will remove it.


The LAION Discord has no new messages. If this guild has been quiet for too long, let us know and we will remove it.


The Mozilla AI Discord has no new messages. If this guild has been quiet for too long, let us know and we will remove it.


The HuggingFace Discord has no new messages. If this guild has been quiet for too long, let us know and we will remove it.


The Gorilla LLM (Berkeley Function Calling) Discord has no new messages. If this guild has been quiet for too long, let us know and we will remove it.


The AI21 Labs (Jamba) Discord has no new messages. If this guild has been quiet for too long, let us know and we will remove it.


PART 2: Detailed by-Channel summaries and links

{% if medium == 'web' %}

Unsloth AI (Daniel Han) ▷ #general (352 messages🔥🔥):

Unsloth's Multi-GPU Support, Llama 3.3 Fine-Tuning, SGLang vs. vLLM, Sales Strategy, FFT Support

  • Unsloth’s Multi-GPU Support in Q1: Multi-GPU support for Unsloth is in the pipeline and expected to launch in Q1, with current testing underway.
    • The team is evaluating pricing and licensing as they work towards finalizing this feature.
  • Fine-Tuning Llama 3.3 Requirements: Fine-tuning Llama 3.3 requires approximately 41GB of VRAM, as indicated on the Unsloth blog.
    • This model shows significant performance enhancements compared to previous versions when fine-tuned properly.
  • SGLang vs. vLLM Performance: The community discussed SGLang and vLLM with the consensus that vLLM generally offers better throughput for production inference tasks.
    • SGLang is considered useful for structured outputs, while vLLM provides greater performance in other areas.
  • Sales Strategy and Product Availability: There is a call for more streamlined sales processes for Unsloth, especially as interest in enterprise solutions grows.
    • While the enterprise product is still in beta, the team aims to gauge demand and adjust their sales approach accordingly.
  • FFT Engine Support: FFT is not currently supported in Unsloth but can be implemented manually by users.
    • Discussion highlighted that utilizing FFT could provide significant performance improvements over other training engines.

Links mentioned:


Unsloth AI (Daniel Han) ▷ #off-topic (117 messages🔥🔥):

Adapters vs Models, Fine-tuning Challenges, Learning Resources for Fine-tuning, Instruction Tuning Limitations, Model Merging Techniques

  • Understanding Adapters and Models: Adapters, specifically Low Rank Adapters (LoRAs), modify a small subset of model parameters, allowing for flexible combinations without full model retraining.
    • To combine them, one can save models with GGUF options for simpler inference or manage them as separate files.
  • Navigating Fine-tuning Obstacles: Fine-tuning isn’t simply pressing a button; it requires understanding the underlying processes to avoid issues like catastrophic forgetting.
    • Members emphasized that successful fine-tuning depends on finding the right balance of adjustments and often involves multiple re-training cycles.
  • Recommended Learning Resources: Resources like DeepLearning.ai and Hugging Face documentation are suggested for deeper learning on fine-tuning and model training.
    • Participants stressed the importance of a strong foundational understanding beyond just fine-tuning techniques.
  • Insights on Instruction Tuning: An insightful paper highlighted that instruction tuning often fails to enhance model knowledge and can lead to knowledge degradation.
    • Members pointed out that dependency on external datasets can diminish response quality, thus reiterating the complexities involved in fine-tuning.
  • Exploring Model Merging Techniques: Experimenting with merging models can yield mixed results, as maintaining balance is critical to overcoming trade-offs between various techniques.
    • Merging techniques, including combining base instructions and LoRA adjustments, require careful management to avoid common pitfalls like loss of accuracy.

Links mentioned:


Unsloth AI (Daniel Han) ▷ #help (468 messages🔥🔥🔥):

Fine-tuning LLMs, RAG (Retrieval-Augmented Generation), Quantization, Using Google Colab and Kaggle for model training, JSON data formatting for models

  • Challenges in Fine-tuning LLMs: A user highlighted difficulties in fine-tuning LLMs due to hardware limitations, particularly using TinyLlama and struggling with a large dataset of 1GB in JSON format.
    • Despite the struggle, progress was made in fixing the environment and better understanding the training processes.
  • Introducing RAG for Enhanced Learning: The importance of Retrieval-Augmented Generation (RAG) was emphasized as a potentially more effective method than direct fine-tuning, especially when using a smaller model for specific tasks.
    • Participants discussed using techniques like chunking data and embedding to improve model performance and reduce the complexity of initial training.
  • Quantization for Efficient Resource Usage: Quantization techniques were discussed as a way to reduce memory and computational costs when training models, allowing for larger model sizes like 4-bit or 8-bit representations.
    • Users were advised to use proper quantization settings to avoid crashing their local machines during training.
  • Utilizing Online Platforms for Training: Google Colab and Kaggle were recommended as alternatives for accessing GPU resources without significant expense, particularly for users with limited local compute capacity.
    • Despite resistance to using cloud platforms, participants acknowledged their utility for initial learning and model testing.
  • Navigating JSON Data Formatting: Formatting JSON data correctly was identified as a critical step for successful model training, yet participants faced challenges with large datasets.
    • Improving the structure and formatting of JSON files was seen as necessary for utilizing RAG effectively and preparing for fine-tuning efforts.

Links mentioned:


Cursor IDE ▷ #general (706 messages🔥🔥🔥):

Cursor 0.44.4 Release, O1 vs Sonnet 3.5 Performance, Website Builders vs Custom Code, Gemini-1206 Capabilities, The Role of College for Startups

  • Cursor 0.44.4 Released: The channel discussed the recent release of Cursor version 0.44.4, detailing several new features and improvements including agent enhancements and Yolo mode.
    • Users reported better performance with the agent in 0.44.4, noting its ability to run commands and handle tasks more efficiently.
  • Discussion on O1 and Sonnet 3.5: The conversation centered around O1, which is priced at approximately 40 cents per request, and its value compared to Sonnet 3.5, with users sharing different opinions on their effectiveness.
    • Some users found Sonnet 3.5 to be sufficient for their needs, expressing skepticism about whether O1 justifies its cost.
  • Opinions on Website Development Tools: A debate arose about the use of website builders like Framer versus coding from scratch, highlighting the trade-offs between time savings and cost.
    • While some appreciated the efficiency of website builders, others felt that custom coding offered more flexibility and control.
  • Capabilities of Gemini-1206: Inquiries were made about users’ experiences with Gemini-1206, with some expressing interest in its features and potential benefits.
    • However, others remained focused on the performance of established models like Sonnet 3.5 for coding tasks.
  • The Importance of College for Startups: The discussion touched on the value of college education, particularly Ivy League schools, versus pursuing startup ventures.
    • Participants debated the necessity of formal education in a startup context, weighing it against practical experience and success.

Links mentioned:


Codeium (Windsurf) ▷ #discussion (65 messagesđŸ”„đŸ”„):

Flex credits rollover, Using repoprompt in Windows, Integrating features from GitHub, Codeium extension issues, Windsurf user experience

  • Flex credits rollover clarified: A member confirmed that flex credits roll over, ensuring users retain their credits after payment.
    • This was corroborated by another who mentioned that their usage was reset upon making the payment.
  • Seeking repoprompt equivalent for Windows: A user asked for an equivalent to repoprompt on Windows, showing interest in similar functionalities in their environment.
    • While no direct alternatives were provided, members encouraged exploring options and testing different setups.
  • Integrating GitHub features using ChatGPT: A member expressed challenges in using ChatGPT to integrate a feature from one GitHub branch to another and inquired about available guides.
    • Suggestions included looking for specific YouTube channels and guides to facilitate the integration process.
  • Codeium extension issues in VSCode: Users reported problems with the Codeium extension not supporting autocomplete in Jupyter notebooks within VSCode, unlike before.
    • One member also mentioned downgrading the extension to fix a server disconnection issue.
  • Frustrations with Windsurf user experience: A user expressed frustration with Windsurf’s handling of large files, specifically mentioning the consistent deletion of the same code lines.
    • They felt that the cascade feature needs improvement, but showed reluctance to submit a bug ticket through the official support channels.

Codeium (Windsurf) ▷ #windsurf (509 messagesđŸ”„đŸ”„đŸ”„):

Windsurf performance issues, Cline + Gemini usage, Codeium support and features, Model comparisons, Credit management in AI tools

  • Windsurf performance issues after updates: Several users reported that Windsurf has become less functional in recent days, with issues such as editing files and frequent errors during use.
    • Users are increasingly frustrated with the software’s reliability, prompting some to consider alternative tools like Cursor.
  • Positive feedback on Cline + Gemini usage: Some users mentioned that using Cline with Gemini 2.0 leads to better coding results compared to Windsurf, with smoother functionality.
    • Users appreciated the efficiency of Cline, especially for tasks like refactoring and handling larger code without issues.
  • Inquiring about Codeium support and improvements: Users expressed a desire for more responsive support from Codeium and reported the lack of recent updates or fixes for existing issues.
    • The community is keen on seeing improvements and features that align with current user needs for better functionality.
  • Comparisons of AI models and their effectiveness: Discussions around the differences in performance between various models, including Claude Sonnet and Gemini, highlighted varying efficiencies for specific tasks.
    • Users noted the need for contextual information and proper documentation to enhance AI model utility.
  • Concerns regarding credit management and costs: Concerns were raised about the consumption of credits in Windsurf and how it affects user experiences, especially with large tasks.
    • Users are evaluating the cost-effectiveness of different plans and the implications of credit usage with their AI tools.

Links mentioned:


Interconnects (Nathan Lambert) ▷ #news (193 messagesđŸ”„đŸ”„):

Gemini 2.0 Flash Thinking, OpenAI updates, Researcher departures, Search engine competition, Reasoning models

  • Gemini 2.0 Flash Thinking launched: Google introduced Gemini 2.0 Flash Thinking, an experimental model designed to explicitly show its thoughts while reasoning, promising improved performance.
    • This model aims to combine speed and enhanced reasoning capabilities, potentially positioning Google strongly in the AI chatbot landscape.
  • OpenAI introduces new chat features: OpenAI announced ‘Work with Apps’ support in voice mode, allowing integration with apps like Notion and Apple Notes, as highlighted on their 12 Days of ChatGPT site.
    • This marks another step for OpenAI to enhance user interaction and functionality in their systems.
  • Significant departures at OpenAI: Notable researcher @AlecRad has left OpenAI, recognized as a key figure in the development of models like GPT, Whisper, and DALL-E.
    • Concerns were raised regarding the future leadership and direction of OpenAI following this departure.
  • Competitive landscape in search engines: @amir reported that Google is integrating its Gemini chatbot directly into search results, marking a strategic shift towards conversational AI modes in search.
    • This raises questions about how competing services, such as Kagi, attract users seeking less commercialized search experiences.
  • Discussions on reasoning in AI models: Participants debated the effectiveness of reasoning models, emphasizing that self-correction may not be necessary if models can output reasoning effectively without errors.
    • This highlights the ongoing exploration of how AI achieves reasoning and its distinction from traditional search methods.

Links mentioned:


Interconnects (Nathan Lambert) ▷ #ml-drama (16 messagesđŸ”„):

o1 model discussion, Chollet's analogies, Subbarao/Miles Brundage incident, Francois Chollet's grumpiness, Interconnects engagement

  • Chollet’s take on o1 as an LLM: François Chollet claimed that labeling o1 as ‘an LLM’ is akin to calling AlphaGo ‘a convnet’, igniting debate among members.
    • While some challenged Chollet, referencing AlphaGo’s reliance on MCTS and neural networks, many expressed confusion over o1’s operating principles.
  • Francois Chollet’s Grumpy Reputation: Members humorously noted Chollet’s grumpy demeanor when discussing the o1 model and its comparisons to established models.
    • Comments highlighted a desire for better clarity on o1, with suggestions that someone from OpenAI should explain its functionality to Chollet.
  • Subbarao/Miles Brundage Incident Recalled: The discussion brought up the Subbarao/Miles Brundage incident, emphasizing the viewpoint that o1 operates primarily as a language model.
    • A member referenced this incident, suggesting it reflects broader misunderstandings in the community about model deployments.
  • Call for Meme on ‘Oneshotting Turbonormies’: A member expressed the need for a meme related to ‘oneshotting turbonormies’, indicating the ongoing meme culture within the discussions.
    • Frustration was expressed over not being able to find the meme quickly when needed.
  • Engagement with Interconnects: Members discussed the value of reading content from Interconnects, with suggestions to reply to Chollet with links to relevant discussions.
    • The conversation highlighted a humorous take on keeping up with the fast-paced debates within the community.

Links mentioned:

  • Tweet from roon (@tszzl): @rao2z @Miles_Brundage but it’s not really a scientific question how a deployed product works or how the model is inferenced. o1 is just a language model
  • Tweet from François Chollet (@fchollet): Calling something like o1 "an LLM" is about as accurate as calling AlphaGo "a convnet"
  ‱ Tweet from François Chollet (@fchollet): For those who didn't get it -- AlphaGo was a MCTS search process that made thousands of calls to two separate convnets in order to compute a single game move. Something like o1 pro is also, best we...

Interconnects (Nathan Lambert) ▷ #random (167 messagesđŸ”„đŸ”„):

Stripe Tax Implementation, Substack Revenue Model and Tax Concerns, CPA Recommendations for Tax Filing, Digital Services and VAT Compliance, Challenges for International Taxation

  • Stripe Tax as a Safety Net: The discussion emphasized the importance of enabling Stripe Tax for digital services to simplify tax compliance, especially for Substack creators approaching revenue thresholds.
    • Turning this feature on can avoid potential headaches down the line with taxation authorities.
  • Confusion Around Substack’s Tax Handling: Participants were uncertain about how Substack handles taxes, with discussions about whether Substack is considered the marketplace operator responsible for tax collection.
    • Nate pointed out that as payments go directly to Substack’s Stripe account, it complicates the tax situation for creators.
  • Learning from Bigger Substackers: Nate noted that even larger Substackers seem to lack knowledge about tax obligations, indicating a possible trend among creators in this space.
    • This raised the point about the broader issues of accountability and responsibility in reporting earnings and taxes.
  • CPA and Tax Advice: Several members suggested reaching out to a CPA for guidance on navigating tax requirements, particularly for digital service businesses.
    • Nate mentioned that his partner’s mother is a CPA and expressed interest in gathering more recommendations to ensure proper compliance.
  • International Tax Challenges: There was discussion around the challenges of managing VAT in Europe and how individuals or businesses might navigate potential tax liabilities in international contexts.
    • One member humorously noted that failing to comply could lead to severe consequences, indicating the seriousness of these tax issues.

Links mentioned:


Interconnects (Nathan Lambert) ▷ #memes (2 messages):

Interactive AI in Game Shows, Social Media Reactions

  • ChatGPT on Game Shows: A Hilarious Twist: A member joked about calling 1-800-ChatGPT during a game of ‘Who Wants to Be a Millionaire’, showcasing the growing influence of AI in pop culture.
    • This humorous reference reflects the ongoing integration of AI into everyday scenarios and entertainment.
  • Viral AI Tweets: A tweet by voooooogel gained attention, though its specifics remain undefined, hinting at AI-related content that sparked discussions.
    • This kind of interaction underlines the curiosity and engagement surrounding AI topics on social media platforms.

Links mentioned:


Interconnects (Nathan Lambert) ▷ #rl (1 messages):

kevin_nejad: it’s interesting (but not obvious) such behaviour emerges purely from RL training


Interconnects (Nathan Lambert) ▷ #lectures-and-projects (7 messages):

RLHF Book, Typos Correction, Fundamentals Review

  • Stress Relief through RLHF Review: A member expressed feeling less stressed about work while spending time on the RLHF Book and a long lecture on YouTube before the break.
    • They found it cathartic to review the fundamentals of RLHF, highlighting its importance.
  • Calling for Typos Corrections: A proposal was made to send free copies of the RLHF book to individuals who help fix typos or formatting problems.
    • Another member eagerly volunteered to contribute, stating they are well-suited due to their command of the English language.
  • Community Involvement in RLHF: An RLHF novice expressed interest in helping with corrections for the book, mentioning that they appreciate free items.
    • The community seems eager to engage, as members recognize the collaborative opportunity.
  • RLHF Resources on GitHub: A member noted that the resources for the RLHF Book are all available on GitHub, making it easy to access.
    • This accessibility facilitates collaboration and contributions from the community.

Interconnects (Nathan Lambert) ▷ #posts (1 messages):

natolambert: Yeah. Students coming up and wanting to take photos is so cute ❀


OpenAI ▷ #annnouncements (1 messages):

12 Days of OpenAI, ChatGPT work enhancements

  • Kickoff for Day 11 on ChatGPT: Day 11 of the 12 Days of OpenAI introduces a new way to work with ChatGPT, detailed in a YouTube live session.
    ‱ Stay in the loop during these days by picking up the designated announcements role in the server’s customize channel.
  • Get Involved with OpenAI Updates: Participants are encouraged to engage with the 12 Days of OpenAI by staying informed on the latest developments and opportunities.
    • This initiative allows members to enhance their experience with ChatGPT and related tools.

Link mentioned: - YouTube: no description found


OpenAI ▷ #ai-discussions (310 messagesđŸ”„đŸ”„):

ChatGPT and integration, Google's AI developments, YouTube clone project, Software engineering automation, AI benchmarks and capabilities

  ‱ ChatGPT adds integrations with Xcode: Users discussed how ChatGPT can facilitate code development by allowing users to copy and paste text directly into Xcode, enhancing the workflow.
    • While this feature offers convenience, it still requires manual input from users for tasks like initiating the copy action.
  • Google releases experimental AI models: Chat participants noted the recent release of Google’s Gemini 2.0 Flash Thinking experimental model, highlighting its performance and the public’s interest.
    • There was skepticism surrounding the model’s accuracy, particularly in simple tasks like counting letters in words.
  • Creating a YouTube clone using ChatGPT: Members were enthusiastic about the prospect of using ChatGPT to create a YouTube clone, discussing the model’s ability to handle front-end and back-end coding.
    • The challenge lies in the more involved terminal operations required for back-end construction, which was acknowledged as a complexity in the process.
  • Future of software engineering with AI: Participants speculated on how advancements in AI could potentially automate entire software engineering tasks, impacting the need for human engineers.
    • Automation was seen as both exciting and concerning, depending on how complex the tasks remain despite AI capabilities.
  • AI performance benchmarks: The community raised questions about AI benchmarks and model performance, particularly regarding Google’s new offerings versus existing ones.
    • Participants expressed interest in the model’s capabilities but also skepticism, emphasizing the ongoing discourse on LLM efficiency.

Links mentioned:


OpenAI ▷ #gpt-4-discussions (8 messagesđŸ”„):

Editing GPTs, Project Folder Limitations, Support Channels, Pro Package Tool Issues

  • Editing GPTs remains a puzzle: There was confusion regarding the capability to edit GPTs, with one user insisting they can edit while another reported being unable to.
    • jerzjorge23 stated that they could only create new GPTs since the recent project release.
  • Limitations of the Project Folder: 7_vit_7 mentioned that moving GPTs into the projects folder isn’t possible due to potential attached files causing conflicts.
    • jerzjorge23 clarified they were not attempting to move files but simply wanted to edit them.
  • Seeking support channels: armantlrp inquired about potential support channels available for assistance regarding tool usability.
    • They noted that multiple tools, including canvas, search, and picture, were unusable on both web and MacOS versions.
  • Pro package tool issues persist: armantlrp has been experiencing issues with their Pro package tools being unavailable for several days.
    • This raised concerns within the community about possible ongoing issues affecting features for Pro package users.

Eleuther ▷ #general (146 messagesđŸ”„đŸ”„):

FSDP and Tensor Parallelism, EleutherAI Token Controversy, Natural Attention Optimizer, Debugging Training Models, Causal Masking in Attention

  • FSDP and Tensor Parallelism Debate: Members discussed the differences between Fully Sharded Data Parallel (FSDP) and Tensor Parallelism, noting that FSDP shards parameters while maintaining operations across GPUs.
    • Some expressed skepticism about the efficiency of FSDP due to increased communication overhead compared to direct tensor parallel implementations.
  • Debunking EleutherAI Token Myths: EleutherAI does not have any affiliated cryptocurrency, and members warned others about scams related to unofficial tokens that have appeared recently.
    • The community emphasized that investing in such tokens is akin to participating in a Ponzi scheme.
  • Introduction of Natural Attention Optimizer: A member shared insights on a new Attention Informed Optimizer that adapts the Adam optimization algorithm using gradients from attention mechanisms.
    ‱ The optimizer is claimed to improve performance significantly, although the surprisingly strong results raised flags regarding potential bugs in the implementation.
  • Challenges in Model Training Debugging: Participants discussed troubleshooting issues in model training, particularly focusing on an unusually low loss value in one participant’s results.
    • Recommendations included double-checking causal masking functions, as incorrect implementations could lead to misleading training metrics.
  ‱ Importance of Causal Masks in Attention: Members highlighted the necessity of causal masks in attention mechanisms to prevent future tokens from influencing current predictions (illustrated in the sketch after this list).
    • It was suggested that overlooking this component could result in extreme discrepancies in model performance and outputs.
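
To make the causal-masking point concrete, here is a minimal sketch (shapes are illustrative): without the mask, each position’s attention row places weight on future tokens, which is exactly the kind of leakage that produces suspiciously low training loss.

```python
import torch

T = 5
scores = torch.randn(T, T)  # raw attention logits for a length-5 sequence

# True above the diagonal marks future positions relative to each query.
mask = torch.triu(torch.ones(T, T, dtype=torch.bool), diagonal=1)
scores = scores.masked_fill(mask, float("-inf"))  # block attention to the future

attn = scores.softmax(dim=-1)  # each row is a distribution over past tokens only
assert torch.allclose(attn.triu(1), torch.zeros(T, T))  # no future leakage
```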

Links mentioned:


Eleuther ▷ #research (123 messagesđŸ”„đŸ”„):

Microsoft Research Ethics, Koopman Theory and Neural Networks, Diffusion vs Autoregressive Models, Plagiarism Concerns in ML Research, Research Submissions and Oversight

  • Concern Over Microsoft Research Ethics: Discussions highlighted issues with ethical practices at Microsoft Research (MSR), including recent plagiarism accusations against their papers for copying work without citation.
    • Previous controversies, such as the Phi methodology, and instances of low integrity were noted, raising questions about the overall ethical culture at MSR.
  • Debate on Koopman Theory Application: Members debated the validity of using Koopman theory in the context of neural networks, with some arguing that the application seems forced and doesn’t yield clear benefits.
    • Concerns were raised about the underlying theoretical justification for such approaches, suggesting that they could unintentionally mislead researchers.
  • Diffusion vs Autoregressive Models: A discussion emerged on the pros and cons of diffusion models compared to autoregressive methods across various modalities, particularly their efficiency and suitability for discrete datasets.
    • While diffusion models currently dominate in image generation, there is speculation about their long-term viability compared to autoregressive techniques in other tasks.
  • Plagiarism in Machine Learning Research: Several members expressed concerns about apparent plagiarism in recent machine learning research papers, particularly those from high-profile institutions like MSR.
    • Calls for accountability and transparency in research practices were made, emphasizing the need for public pushback against unethical conduct.
  • Research Submissions and Oversight: Discussions about the differing oversight structures in research organizations raised questions about the implications for research integrity and the handling of controversies.
    • Members noted how decentralized oversight at MSR may contribute to ethical lapses, contrasting it with more centralized approaches observed in other organizations.

Links mentioned:


Eleuther ▷ #interpretability-general (5 messages):

Independence of Neural Network Activations, Pre-image Reconstruction Methods, Steered vs Unsteered Sparse Autoencoders, Out-of-Distribution (OOD) Evaluation

  • Investigating Independence of Activations: A user inquired about the independence of neural network activations within the same layer, expressing challenges in finding relevant analyses. It was noted that higher model nonlinearity tends to decrease independence in intermediate layers.
  • Challenges in Pre-image Reconstruction: The user detailed experiments with pre-image reconstruction for a CNN using MNIST, finding that edits to one activation affected others. When comparing two pre-image methods, the correlation in activation changes suggested a degree of dependence among activations.
  • Insights on Sparse Autoencoder Features: The user applied similar experiments to sparse autoencoder features, observing a lack of independence between features. This reinforces the notion that activations in neural networks may not behave as independently as traditionally assumed.
  ‱ Measuring Out-of-Distribution (OOD) for SAE Reconstructions: Another user sought best practices for assessing the degree of OOD in steered sparse autoencoders. They inquired whether cosine similarity between centroids of steered vs unsteered activations could be a viable measurement strategy (a toy sketch follows).
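
As a toy sketch of the centroid idea in the last item (shapes and data are illustrative; whether this is a sufficient OOD signal is exactly the open question):

```python
import torch
import torch.nn.functional as F

steered = torch.randn(2048, 768)    # [n_samples, d_model] steered activations
unsteered = torch.randn(2048, 768)  # matched unsteered activations

# Cosine similarity between batch centroids; values well below 1.0 would
# suggest steering has pushed activations off the usual distribution.
sim = F.cosine_similarity(steered.mean(0), unsteered.mean(0), dim=0)
print(f"centroid cosine similarity: {sim.item():.3f}")
```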

Perplexity AI ▷ #general (254 messagesđŸ”„đŸ”„):

Perplexity AI updates, You.com features, Gemini models, Student discounts, Referral systems

  • Perplexity AI Referral System Confirmed: A user confirmed that Perplexity does have a referral system which can benefit those signing up through connections.
    • Another user is enthusiastic about getting more people onboard, stating their whole fraternity might join.
  • You.com Performance vs Models: Concerns were raised about the quality of responses from You.com, suggesting that answers may not match the performance of direct models due to the search interface.
    • Users discussed the value of the actual models being utilized, rather than only mimicking responses through system instructions.
  • Students Can Access Free Pro with .edu Emails: Reports surfaced about students obtaining free Pro access by signing in with .edu emails, although some users noted issues with the process.
  • Anticipation for New Superman Movie: Details about a new Superman movie teaser were shared, prompting mixed reactions and excitement among users.
    • The random announcement was described as surprising, indicating user engagement beyond AI discussions.
  • Challenges in Translating Game Descriptions: A user faced difficulties getting Perplexity AI to translate an entire list of game descriptions into French, with the AI only processing a few before stopping.
    • Assistance was sought in managing the limits placed by the AI on handling large data sets.

Links mentioned:


Perplexity AI ▷ #sharing (7 messages):

EU Funds Starlink Rival, Plants Cry, Law of the Few, Magic Spell Hypothesis, Tornado Alley

  • EU Funds Starlink Rival: A YouTube video discusses how the EU is funding a rival to Starlink, exploring its potential impacts on global internet access.
    • The video also covers the implications of this initiative on connectivity and competition in satellite internet services.
  • Plants Exhibit Crying Behavior: The topic of why plants cry surfaced, discussing the recent findings on this fascinating phenomenon and its implications for plant biology.
    • Readers engaged with sources noting that plant responses can resemble emotional states and reflect their environmental stress levels.
  • Understanding the Law of the Few: The Law of the Few was mentioned, suggesting that a small number of people can influence a larger crowd, as discussed by researchers.
    • Links illustrate how this social principle applies to technology and marketing strategies, enhancing viral growth potential.
  • Magic Spell Hypothesis Review: A document discusses the Magic Spell Hypothesis, outlining its key arguments and relevance to ongoing scientific debates.
    • The link provides insights into its theoretical applications and critiques from the academic community.
  • Exploring Tornado Alley: A recent inquiry into Tornado Alley looked at geographical and meteorological data that define this region’s tornado occurrences.
    • The discussion highlights safety measures and preparedness strategies for those living in vulnerable areas, as shared in a valuable resource.

Link mentioned: YouTube: no description found


aider (Paul Gauthier) ▷ #general (222 messagesđŸ”„đŸ”„):

Gemini models, Aider integration, MCP functionality, OpenAI access issues, Jira task automation

  • Gemini 2.0 Flash Thinking Launch: The new model, gemini-2.0-flash-thinking-exp-1219, was introduced, demonstrating potential for improved reasoning and response quality, particularly in agentic workflows.
    ‱ Initial tests indicated fast performance and higher-quality output compared to existing models like o1 and DeepSeek (a standalone usage sketch follows this list).
  • Aider and MCP Integration: Users discussed setting up the MCP functionality with Aider, successfully integrating it for tasks like creating and managing Jira updates.
    • Some users noted that while Sonnet has been commonly used, there is potential for using other models in MCP setups.
  • OpenAI Access from EC2: A user inquired about accessing OpenAI services from an EC2 server, confirming smooth operation and no issues reported.
    • The original concern was clarified, suggesting it was related to an individual setting rather than a widespread problem.
  • Model Preference in Task Automation: Users identified preferences for the weak model in handling specific tasks like commit messages and summarization within the workflow.
    • Discussions highlighted the versatility of combining different models for optimal performance in task management.
  • Testing Other Models: There are inquiries about the capabilities of various models like Qwen and their performance in coding tasks as well as debugging.
    • Users expressed interest in experimenting with these models for better integration into their workflow automation.
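
For readers who want to poke at the new model outside aider, a hedged sketch of calling it through the google-generativeai Python SDK (as the API looked in late 2024) might resemble the following; within aider it would presumably be selected via the --model flag instead:

```python
# Hypothetical standalone check of the model named in the discussion;
# requires `pip install google-generativeai` and a Gemini API key.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # placeholder, not a real key
model = genai.GenerativeModel("gemini-2.0-flash-thinking-exp-1219")
response = model.generate_content(
    "Explain, step by step, why 0.1 + 0.2 != 0.3 in floating point."
)
print(response.text)
```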

Links mentioned:


aider (Paul Gauthier) ▷ #questions-and-tips (11 messagesđŸ”„):

Using multiple OpenAPI services, Gemini Flash 2.0 issues, Architect mode features, Adding files in a fuzzy way, Project planning models

  • Combining OpenAPI Services Efficiently: A user initially sought guidance on how to use two different OpenAPI services, specifically QwQ on Hugging Face and local Ollama.
    ‱ They later realized that Hugging Face has its own API and that the provider must be specified as part of the model name.
  • Gemini Flash 2.0 Modifications: A member reported ongoing issues with Gemini Flash 2.0, noting it typically modifies the wrong instance, commonly the first.
    • Another member suggested employing the AI comments feature as a workaround.
  ‱ Does --watch-files Work with Architect Mode?: Inquiries arose regarding whether the --watch-files option is compatible with architect mode.
    • A response indicated that adjustments would be prompted when using the option correctly.
  • Fuzzy File Addition in Chat: A user asked about a method for adding files in a fuzzy manner without specifying the full path each time, sharing an example output.
    ‱ They discovered that committing files is necessary for Aider to auto-suggest them on the /add command.
  • Recommended Hardware for Aider Client: A question emerged regarding suitable hardware for running the Aider client-side, with reports of the LLM finishing while the client delays response assembly.
    • Another member responded that such delays should not be occurring, indicating potential issues with the setup.

GitHub Copilot Chat, Aider Composer VSCode Extension, Diff Edits Preference

  • GitHub Copilot Chat Immersive Mode Launched: GitHub announced enhanced features for Copilot Chat, including an immersive chat experience and smarter, faster responses tailored to user needs.
    • Real-time interactions with your codebase are now supported, allowing for immediate answers to coding questions and facilitating effortless code generation.
  ‱ Aider Composer Extension Review: A review of the Aider Composer VSCode Extension highlighted the new diff accept view, which replaces the previous git diff view; the review noted it doesn’t commit to git, limiting undo capabilities.
    • The main advantage of this extension is its use of the installed version of Aider, enhancing user control over the coding process.
  • Improvement in GitHub Copilot since Launch: Members discussed how GitHub Copilot has progressed since its initial free release, now offering features like Claude Sonnet integration and a multi-file edit function.
    • Despite this, one user critiqued that the free tier still offers limited access, leading to concerns over cost-effectiveness versus traditional diff edits.
  • Preference for Diff Edits: In comparing GitHub Copilot’s features, users showed a preference for traditional diff edits, finding them more effective and economically viable.
    • One member expressed satisfaction with Copilot’s improvements, yet still advocated for the enduring utility of diff edits in coding workflows.

Link mentioned: Announcing GitHub Copilot Free · GitHub Changelog


Stackblitz (Bolt.new) ▷ #announcements (1 messages):

Bolt Supabase Integration

  • Bolt & Supabase Integration Goes Live: The Bolt<>Supabase integration is officially live and available for everyone, simplifying the process significantly.
    • No manual setup: just click, connect, and it’s done, making it easier for users to get started.
  • Effortless Connection Transition: Users can now effortlessly integrate their applications with Bolt by connecting to Supabase with a single step.
    • This integration aims to streamline developer workflows and eliminate complex setup processes.

Link mentioned: Tweet from StackBlitz (@stackblitz): 📱 Announcing: Supabase<>Bolt integration! No manual setup: just click, connect, and it’s done!


Stackblitz (Bolt.new) ▷ #prompting (14 messagesđŸ”„):

Bolt project setup, Issues with .env file, Direct uploads from Figma, Application review process

  • Issues arise with .env file resetting: Users reported problems with the .env file resetting, causing errors with their Firebase setup. One member noted that locking the .env file can help prevent changes during the session, but encountered issues with it being overridden after refreshing.
    ‱ The error “This project exceeds the total supported prompt size” was cited as a common problem users are facing as a result of this issue.
  • Direct uploads from Figma not possible yet: A user inquired about the possibility of uploading Figma files for Bolt to generate code, but it was confirmed that direct uploads are not currently supported. The suggested method is to take screenshots as a workaround.
    • This method has been requested, indicating a demand for improved integration with design tools like Figma.
  • Finding redundancies in Bolt applications: A user questioned if there’s a way for Bolt to review applications for redundancies and clean them up efficiently. The sentiment suggests that the current process may consume unnecessary tokens without providing an effective cleanup solution.
  • Creating a public folder for projects: Instructions were shared about creating a public folder for projects and adding images to it for use with Bolt. Users expressed confusion on how to implement these steps effectively and where to locate this folder.
    • Clarifications on the folder setup indicate that users still seek clearer guidance on project structure.

Stackblitz (Bolt.new) ▷ #discussions (182 messagesđŸ”„đŸ”„):

Bolt Issues and Feedback, Community Support and Resources, Supabase Integration, Functionality and Token Use, User Experience with Bolt

  • Users Encountering Downtime and Session Issues: Multiple users reported difficulty logging into Bolt and issues with session timeouts that required refreshing the page, resulting in lost chat history.
    • While the team is aware and working to resolve the authentication issues, many expressed frustration over the impact on their projects and token use.
  • Community Collaboration on Projects: Members discussed creating a helpful guide for Bolt, leveraging community contributions and focusing on supporting each other through project development.
    • The collaboration included plans for user dashboards to upload and approve guides, indicating a proactive community effort.
  • Integration of Supabase and Future Features: Discussions highlighted the integration of Supabase with Bolt, emphasizing its importance alongside future plans for Stripe integration and improved token management.
    • Users were keen on understanding how to best utilize Supabase within existing projects and the functionality of the different modes available.
  • Feedback on Product Functionality and Token Consumption: A number of users expressed frustration regarding token consumption when building applications, suggesting that redundant outputs often lead to excessive token use.
    • Suggestions were made for improving the review process for application outputs to manage redundancy and optimize token use.
  • Exploration of Tech Stack and Development Challenges: Members discussed recommended tech stacks for mobile app development, particularly focusing on compatibility with Supabase and overall functionality with Bolt.
    • Some users experienced challenges in successfully building projects that met their expectations, leading to questions about effective utilization of Bolt.

Links mentioned:


Notebook LM Discord ▷ #announcements (1 messages):

Interactive Mode for Audio Overviews

  • Interactive Mode for Audio Overviews now live for all!: The team has successfully rolled out improvements to Interactive Mode for Audio Overviews to 100% of users.
    • Users are encouraged to try it out or revisit it if they previously found it unresponsive.
  • Exciting Audio Feature Rollout: Many NotebookLM engineers have worked hard to enhance the performance of the audio feature overviews.
    • This update aims to provide a smoother experience for all users.

Notebook LM Discord ▷ #use-cases (17 messagesđŸ”„):

NotebookLM video generation, Interactive podcast feature, Podcast editing workflows, Connection of MySQL database to NotebookLM, YouTube content creation

  • AI-generated video explores isolation in space: A user shared an AI-generated video vlog capturing the experiences of an astronaut isolated in space for a year, showcasing the toll of loneliness and creativity through this YouTube link.
    • It’s a gripping portrayal of a mind unraveling, described another user about the video.
  • Podcast interactions not saved: A user clarified that the interactive podcast feature does not save interactions as part of the podcast, making it necessary to record both separately for external listeners.
    • This raised questions about the workflow, prompting a user to seek clarification on the podcast creation process.
  ‱ YouTube channel showcases animated videos: Another user pointed out the prolific content creation of a member who has been uploading varied videos almost daily, including animated ones made with NotebookLM outputs, accessible here.
    • Viewer feedback expressed appreciation for the content, noting the creative demands behind such frequent uploads.
  • Need help connecting MySQL to NotebookLM: A game master sought assistance on how to connect their extensive MySQL database to NotebookLM for automating NPC reactions in their long-running RPG sessions.
    • They highlighted their experience of running games for over 10 years with a large player base, indicating the complexity involved.
  • Podcast style prompt for efficiency: One user shared a prompt designed to make podcast dialogue more succinct and blunt, focusing on a review of a QWQ model related to video acceleration calculations.
    • The provided audio extract was aimed at enhancing the podcast’s delivery style by encouraging fast-paced, no-nonsense dialogue.

Links mentioned:


Notebook LM Discord ▷ #general (144 messagesđŸ”„đŸ”„):

Notebook LM Interactive Mode, Audio Overview Pronunciation, Notebook Features Across Notebooks, User Feedback on New UI, Experimental Use of AI in Storytelling

  • Users report on Notebook LM Interactive Mode rollout: Many users are discussing their experiences with the new Interactive Mode, noting that while some have access, others are still waiting for the feature to be fully rolled out.
    • Despite some initial challenges, users are excited about the creative possibilities this mode provides.
  • Pronunciation issues in Audio Overviews: A user reported that Notebook LM incorrectly pronounced the name Shane Gostisbehere, repeatedly confusing it with a different name, highlighting pronunciation challenges.
    • The development team is actively investigating the issue, and users are encouraged to provide audio samples for better understanding.
  • Questions about using features across multiple notebooks: A user inquired if features and content can be shared across multiple notebooks created for different modules.
    • It was confirmed that users must upload all sources to the same notebook, as cross-notebook functionality is currently not available.
  • Positive Feedback on New User Interface: Several users expressed appreciation for the recently updated Notebook LM UI, finding it highly workable and user-friendly.
    • The team is receiving positive reinforcement, with users eager to explore the new features and capabilities.
  • Creative AI Use in Storytelling: A user shared their excitement about using AI for storytelling, detailing an experiment in generating characters for a TTRPG set in a cyberpunk future.
    • They highlight how Notebook LM managed to adapt to various narrative challenges while staying true to the story’s source material.

Links mentioned:


Stability.ai (Stable Diffusion) ▷ #general-chat (102 messagesđŸ”„đŸ”„):

Running SDXL on Ubuntu, ComfyUI Issues, AI Image and Video Quality, Quantum Computing Conversations, Civitai Website Issues

  • Recommendations for Running SDXL on Ubuntu: Several members discussed tips for running SDXL on Ubuntu, with suggestions ranging from using Forge UI to utilizing shell launch files for easier setup.
    • Nuuideas pointed out that the lack of knowledge about the system might be hindering ComfyUI performance.
  • ComfyUI’s Persistent Problems: There were complaints about ComfyUI having annoying errors and producing burnt images when using certain sampling methods, despite attempts to troubleshoot.
    ‱ Nuuideas recommended using Euler sampling and keeping denoising settings optimal for better results (see the diffusers sketch after this list).
  • Expectations vs. Reality for AI Images and Video: Discussion on whether AI-generated images and video have reached perfection, with earnstar asserting they won’t be perfect even by 2030 due to numerous challenges.
    • Eyaura disagreed, claiming rapid advancements in AI technology could yield improvements sooner.
  • Debate on Quantum Computing’s Future: Conversations revolved around quantum computing, particularly concerning the implications of proving problems like P=NP, with Nuuideas expressing concerns over the practicality of quantum algorithms.
    • Earnstar highlighted the challenges of extracting useful results from quantum states.
  • Civitai Website Functionality: Wallykz reported issues accessing civitai.com, with other members confirming outages and noting the site tends to be offline frequently.
    • Crystalwizard mentioned that the website often has server issues that disrupt accessibility.
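
The Euler-sampling advice above is a sampler-node setting in ComfyUI; as a scriptable equivalent, here is a hedged sketch using the diffusers library (model id, prompt, and step count are illustrative, not the participants’ setup):

```python
import torch
from diffusers import StableDiffusionXLPipeline, EulerDiscreteScheduler

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")

# Swap in the Euler sampler, the method recommended in the discussion.
pipe.scheduler = EulerDiscreteScheduler.from_config(pipe.scheduler.config)

image = pipe("a lighthouse at dusk", num_inference_steps=30).images[0]
image.save("lighthouse.png")
```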

Links mentioned:


GPU MODE ▷ #general (58 messagesđŸ”„đŸ”„):

Coil Whine, GPU Performance & Choices, Bottlenecking Debate, VRChat VRAM Needs, Next-Gen GPU Pricing

  • Coil Whine Strikes Again: A user expressed concern about absurd coil whine from a returned RX 6750XT, leading to a bad experience.
    • Another member humorously suggested that the coil whine might be loud enough to play music.
  • Deciding On GPU Choices: Discussion revolved around choosing a budget-friendly GPU, with suggestions like the 7900 XTX being mentioned as a good option compared to NVIDIA cards.
    • The consensus leaned towards waiting for next-gen GPUs due to anticipated high prices.
  • Bottlenecking Argument Sparks Debate: A user argued that bottlenecking does not exist, while others pointed out that a weaker CPU can delay frame delivery to the GPU.
    • The debate highlighted varying opinions on the influence of CPU performance on overall FPS.
  ‱ VRChat’s VRAM Hunger: VRChat’s VRAM needs were brought up, with implications that it could rapidly consume available memory, leading to performance issues.
    • Users noted that many gamers opt for 4090s due to these demands.
  • Next-Gen GPU Fears: Concerns were raised that future RTX 50s could come at an outrageously high price compared to current offerings.
    • Despite the worries, AMD’s promise to deliver competitive performance at a lower cost created cautious optimism.

GPU MODE ▷ #triton (4 messages):

tl.dot input shape requirements, AMD GPU performance vs PyTorch, Nvidia Hopper warp-specialization deletion, Triton performance optimization

  ‱ tl.dot needs >= 16 for Input Shapes: Input shapes for tl.dot should have M >= 16, N >= 16, and K >= 16, primarily due to tensor core tile requirements (see the sketch after this list).
    • A user queried whether tl.dot can default to using CUDA cores for computations when M, N, or K are less than 16.
  • Searching for faster AMD GPU kernels: A user asked if anyone has found a kernel that performs faster on AMD GPUs like the RX 7900 compared to PyTorch/rocBLAS.
    ‱ Another user noted that, as of now, Triton performance has not yet surpassed AMD’s BLAS implementations such as rocBLAS, particularly on Navi31.
  • Nvidia Hopper’s warp-specialization feature removed: A user discovered that the warp-specialization feature of Nvidia Hopper has been removed in Triton 3.x.
    • They inquired about possible techniques to achieve better performance with Triton following this change.
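
A minimal single-block Triton sketch of the tl.dot constraint: 16 is the smallest block size that satisfies the tensor-core tile requirement (dtypes and row-major layout are illustrative):

```python
import torch
import triton
import triton.language as tl

@triton.jit
def tiny_dot_kernel(a_ptr, b_ptr, c_ptr,
                    BLOCK_M: tl.constexpr, BLOCK_N: tl.constexpr,
                    BLOCK_K: tl.constexpr):
    offs_m = tl.arange(0, BLOCK_M)
    offs_n = tl.arange(0, BLOCK_N)
    offs_k = tl.arange(0, BLOCK_K)
    # Load one (M, K) tile and one (K, N) tile, row-major and contiguous.
    a = tl.load(a_ptr + offs_m[:, None] * BLOCK_K + offs_k[None, :])
    b = tl.load(b_ptr + offs_k[:, None] * BLOCK_N + offs_n[None, :])
    c = tl.dot(a, b)  # requires BLOCK_M, BLOCK_N, BLOCK_K all >= 16
    tl.store(c_ptr + offs_m[:, None] * BLOCK_N + offs_n[None, :], c)

a = torch.randn(16, 16, device="cuda", dtype=torch.float16)
b = torch.randn(16, 16, device="cuda", dtype=torch.float16)
c = torch.empty(16, 16, device="cuda", dtype=torch.float32)
tiny_dot_kernel[(1,)](a, b, c, BLOCK_M=16, BLOCK_N=16, BLOCK_K=16)
```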

GPU MODE ▷ #cuda (5 messages):

cudaMemcpy performance, CUTLASS tma_store_wait function behavior, Documentation on TMA operations

  • Exploring Faster Alternatives to cudaMemcpy: A member inquired if there are faster methods to copy small data sizes (e.g., 12 bytes) to device memory than using cudaMemcpy, which reportedly takes about 1-2us.
    ‱ This raises questions about optimizing small host-to-device transfers in CUDA programming; one standard mitigation is sketched after this list.
  • tma_store_wait may complete automatically: A member observed that after executing a TMA-store operation using tma_store_wait in CUTLASS, waiting might not be necessary as it seems to complete automatically.
    • This suggests that its behavior resembles expect_tx, prompting discussions on its efficiency in handling operations.
  • Need for Documentation Confirmation: In response to the discussion on TMA operations, a member asked for documentation that clarifies whether the functionality is supported as initially believed.
    • The request emphasizes the importance of accurate and accessible documentation for development practices.
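
On the small-copy question above, one standard mitigation (not necessarily what the member adopted) is pinned host memory with an asynchronous copy, sketched here via PyTorch for brevity rather than raw CUDA:

```python
import torch

# Pinned (page-locked) host memory lets the driver DMA straight to the
# device, which typically lowers per-copy latency versus pageable memory.
src = torch.tensor([1.0, 2.0, 3.0]).pin_memory()  # 12 bytes of payload
dst = torch.empty(3, device="cuda")

stream = torch.cuda.Stream()
with torch.cuda.stream(stream):
    dst.copy_(src, non_blocking=True)  # async host-to-device copy
stream.synchronize()
```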

GPU MODE ▷ #torch (1 messages):

0x000ff4: any one contributin to keras/pytorch?


GPU MODE ▷ #algorithms (21 messagesđŸ”„):

Genesis AI, Sim2Real Technology, CARLA Simulator Update, Synthetic Data Generation for Autonomous Driving, Dexterous Task Applications

  • Genesis AI Sparks Interest: The community expressed excitement about Genesis and its potential applications, particularly highlighting the impressive water droplet demo.
    • One member remarked, ‘Super cool thing,’ showcasing the appeal of new tools in AI.
  • Exploring Sim2Real Concepts: Discussion pivoted to Sim2Real, focusing on its capability to transfer skills from simulation to real-world applications, with emphasis on tasks like cooking and assembling.
    • One user questioned, ‘I wonder how it does on dexterous tasks,’ indicating interest in its practical functionality.
  • CARLA Simulator Receives Major Upgrade: The team celebrated the release of CARLA version 0.10.0, which enhanced visual fidelity through a migration to Unreal Engine 5.5 and introduced advanced features like Lumen and Nanite.
    • This update includes upgraded environments and assets, showcasing advancements in rendering technology.
  • Synthetic Data Generation Discussions Emerge: There was speculation regarding Waymo’s approach to data, with a member noting that Waymo might also generate synthetic data alongside real driving data.
    • Links to relevant articles were shared, including this research on embedding synthetic data for autonomous driving.
  • Future of Synthetic Data in Autonomous Driving: Members elaborated on how future advancements might see the majority of simulated data being synthetic, possibly integrating tools like Genesis.
    • The conversation concluded with curiosity about how well such frameworks scale, noting their potential for generating accurate vehicle dynamics.

Links mentioned:


Image Analysis, User Concerns

  • Discussion sparked over image analysis: A member referenced an image in the channel which triggered discussions about its content and relevance.
    • While no specific details were quoted about the image, the engagement indicates it caught the group’s attention.
  • Humorous engagement with image: The member’s response included a light-hearted remark indicating amusement with the contents of the image shared.
    • This humor suggests a lively atmosphere in the chat, contributing to the overall enjoyment of the discussion.

GPU MODE ▷ #jobs (1 messages):

MatX hiring, LLM accelerator ASIC development, Low level compute kernel author roles, ML performance engineer roles, In-person work culture

  • Join MatX as they build LLM accelerator ASIC: MatX is hiring for positions including low level compute kernel author, compiler, and ML performance engineer roles. Interested candidates can find more information in their job listing here.
    • They value efficiency and high-quality solutions, open to applicants from fresh graduates to seasoned professionals.
  • MatX encourages innovative problem-solving: The team emphasizes the need to consider new approaches, often abandoning traditional methods for better alternatives that suit their context.
    • They prioritize making big decisions with deep understanding, indicating that thorough reasoning often outweighs extensive testing.
  • Emphasis on high-trust team environment: MatX promotes a culture rooted in inviting and including diverse perspectives within their high-trust team.
    • Supportive teamwork is essential, as they believe it is crucial for tackling complex challenges collectively.

Link mentioned: MatX: faster chips for LLMs. Come work with us!



GPU MODE ▷ #sparsity-pruning (1 messages):

Sparsity Design, Sparsifier Functionality, Sparsify Kernel Optimization, Demo for Sparsify Usage

  • Understanding the Role of Sparsifier: A query was raised about the functionality of the Sparsifier, specifically if it’s responsible for determining the sparsity pattern.
    • It was clarified that the Sparsifier indeed determines the pattern but its output’s interaction with the kernel optimization process via sparsify_ was questioned.
  • Interaction between Sparsifier and Sparsify: The user inquired whether the sparsify_ function consumes the output of the Sparsifier during its operation.
    • Understanding this interaction is crucial for optimizing sparsity in designs and further guidance was sought.
  • Request for Sparsify Usage Demo: A request was made for a demonstration about the usage of sparsify_, highlighting the need for practical examples.
    • This demo would provide insights on how to effectively implement the sparsity design in real scenarios.

GPU MODE ▷ #self-promotion (1 messages):

alma Python Package, Model Benchmarking, PyTorch Conversion Options

  • Open-Sourcing alma: The Benchmarking Marvel: A duo has just open-sourced their project, alma, a Python package designed for benchmarking the speed of over 40 PyTorch conversion options with a single function call.
    • It boasts features like graceful failure handling and isolated processes for safer testing, aiming to simplify CI integration.
  • Future Integration Plans for alma: Future developments are aimed at adding more conversion options and integrating with JAX, llama.cpp, and VLLM for enhanced versatility.
    • The creators invite users to share ideas through GitHub, emphasizing community engagement to broaden functionality.
  • Real-World Performance Examples: Example outputs show impressive results, with EAGER mode achieving a throughput of 282395.70 samples/second on a CUDA device.
    • In EXPORT+EAGER, performance slightly enhanced with 305974.83 samples/second, showcasing the package’s efficiency.

Link mentioned: GitHub - saifhaq/alma: a Python package for benchmarking the speed of PyTorch model conversion options.


GPU MODE ▷ #thunderkittens (1 messages):

kimishpatel: what i cam here for 🙂


GPU MODE ▷ #arc-agi-2 (5 messages):

Cost of GPUs on Vast AI, Generative Flow Networks, ARC Prize Daily Puzzle, Training Smaller Models, Synthesizing Riddles

  • Vast AI Offers Cheap GPU Options: A member noted that GPUs on Vast AI are very affordable, specifically mentioning that the 3070 is a cost-effective choice for personal use.
    • Another member shared their experience, indicating they had previously checked only Lambda and Runpod for GPU options.
  • Exploring Generative Flow Networks for Dataset Generation: Discussion arose around Generative Flow Networks as a promising method for synthetic dataset generation, particularly in scenarios with costly oracle rewards.
    • A member shared a paper on the topic, highlighting the potential to reduce the labeling of (problem, reward) pairs.
  • Solving the ARC Prize Daily Puzzle: A member celebrated successfully solving the ARC Prize Daily Puzzle, emphasizing the daily challenge at 12pm UTC that requires sorting input.
    • They expressed skepticism about autoregressive models, noting their limitations in sorting by design unless employing some prior reasoning.
  • Training with Smaller Models for Efficiency: A member mentioned the practicality of iterating with smaller models trainable on 24G, suggesting an efficiency gain compared to larger models.
    • This aligns with the previous discussion about exploring low-cost GPU options for optimal training results.
  • Challenges in Riddle Synthesis: A member reflected on the difficulties of finding the right representation for generating riddles, emphasizing the need for input boards and transformations.
    • They highlighted the importance of ensuring that all relevant parameters for transformation can be derived from examples provided.

Link mentioned: ARC Prize - Play the Game: Easy for humans, hard for AI. Try ARC-AGI.


Latent Space ▷ #ai-general-chat (87 messagesđŸ”„đŸ”„):

AI Agentic Systems, Gemini 2.0 Flash Thinking, Databricks Funding, ModernBERT Release, Alec Radford Departure from OpenAI

  • AI Agentic Systems on the Rise: Anthropic shared insights on successful implementations of agentic systems and emphasized building with simple, composable patterns, suggesting 2025 will be pivotal for the field.
    • The blog post from Anthropic highlights best practices and evolving definitions of agents and workflows in AI.
  • Gemini 2.0 Flash Thinking Dominates: The introduction of Gemini 2.0 Flash Thinking showcased its reasoning capabilities, achieving top rankings across various categories and outperforming its predecessor in multiple tasks.
    • According to reports, this model explicitly shows its thought processes, improving reasoning performance more effectively.
  • Databricks Secures Major Funding: Databricks announced a Series J funding round led by Thrive Capital, raising $10 billion and achieving a valuation of $62 billion while expecting to cross a $3 billion revenue run rate.
    • This marks significant momentum for the company, reflecting a 60% year-over-year growth largely driven by AI demand.
  • ModernBERT Release Sparks Interest: The launch of ModernBERT, which offers improvements over older models with longer context and enhanced performance, has captured significant attention in the AI community.
    • Discussion around its features and potential applications has highlighted the excitement for its integration into existing workflows.
  • Alec Radford Leaves OpenAI: Alec Radford, a key figure in OpenAI’s GPT development, is departing to pursue independent research, raising questions about the organization’s future.
    • This personnel shift has led to speculation regarding OpenAI’s direction amidst other recent changes in the industry.

Links mentioned:

  ‱ Tweet from vik (@vikhyatk): everyone’s posting đŸ€Ż tweets about this but i can tell no one has tried it out. because i did and it says the method generate doesn’t exist on module genesis. Quoting Allen T. (@Mr_AllenT): This is the cr...
  • Tweet from Junyang Lin (@JustinLin610): so sorry for making you expect. nothing will happen tonight. we still need to make things better for this release. will be back very soon.
  • AI Engineer Summit: The highest-signal technical AI event of the year. For AI Engineers & Leaders, Feb 20 - 21, 2025.
  • Tweet from Lincoln 🇿🇩 (@Presidentlin): @Alibaba_Qwen They made a reasoning VL model?
  • Genesis: no description found
  • Building effective agents: A post for developers with advice and workflows for building effective AI agents
  ‱ Tweet from ElevenLabs (@elevenlabsio): Meet Flash. Our newest model that generates speech in 75ms + application & network latency. You’ve never experienced human-like TTS this fast.
  ‱ Tweet from lmarena.ai (formerly lmsys.org) (@lmarena_ai): Breaking news from Chatbot Arena ⚡đŸ€” @GoogleDeepMind's Gemini-2.0-Flash-Thinking debuts as #1 across ALL categories! The leap from Gemini-2.0-Flash: Overall #3 → #1; Overall (Style Control) #4 → #...
  • Tweet from Stephanie Palazzolo (@steph_palazzolo): Big OpenAI personnel news w/ @erinkwoo: Alec Radford, the lead author of OpenAI's original GPT paper, is leaving to pursue independent research. https://www.theinformation.com/briefings/senior-op...
  ‱ Tweet from swyx (@swyx): this is what it looks like to work with AGIs like @benghamine behind the scenes. wondering when @recraftai or @GeminiApp or @xai can match this workflow. Quoting jason liu (@jxnlco): oohh shit this is hu...
  • Tweet from Sholto Douglas (@_sholtodouglas): I really like the thoughts in this problem, a cute example of out of the box thinking. As models get stronger, taking them seriously will continue to be the right way to understand both the current ge...
  • Luzia: Your intelligent assistant at a click: Access the power of AI easily and for FREE. Luzia (I am Luzia) helps you in your day to day life with thousands of tasks, whether at work, at school, in your social moments or while pursuing your pass...
  ‱ Databricks is Raising $10B Series J Investment at $62B Valuation: Funding led by new investor Thrive Capital. Company expects to cross $3B in revenue run rate and achieve positive free cash flow in fourth quarter.
  ‱ Tweet from Noam Shazeer (@NoamShazeer): We’ve been *thinking* about how to improve model reasoning and explainability. Introducing Gemini 2.0 Flash Thinking, an experimental model trained to think out loud, leading to stronger reasoning perfo...
  • Finally, a Replacement for BERT: Introducing ModernBERT: no description found
  ‱ Tweet from Odyssey (@odysseyml): Today is a big day at @odysseyml. We're sharing Explorer, our first generative world model. We think world models are the next frontier for AI, enabling wonderful new things. To help shape this, Ed ...
  ‱ Tweet from el.cine (@EHuanglu): AI voice is taking over! ElevenLabs just launched Flash 2.5, capable of generating lifelike film dialogue from text almost instantly. From now on, don’t believe everything you hear!
  • Tweet from Alex Volkov (Thursd/AI) (@altryne): I evaluated o1-2024-12-17 (with all 3 reasoning efforts) and gemini-2.0-flash-thinking-exp-1219 on 10 challenging questions from @AIExplainedYT simple bench and got some surprising results! Flash thin...
  ‱ Tweet from Alex Albert (@alexalbert__): 2025 will be the year of agentic systems. The pieces are falling into place: computer use, MCP, improved tool use. It's time to start thinking about building these systems. At Anthropic, we're se...
  ‱ Tweet from Logan Kilpatrick (@OfficialLoganK): Just when you thought it was over... we’re introducing Gemini 2.0 Flash Thinking, a new experimental model that unlocks stronger reasoning capabilities and shows its thoughts. The model plans (with th...
  ‱ Tweet from Zhou Xian (@zhou_xian_): Everything you love about generative models — now powered by real physics! Announcing the Genesis project — after a 24-month large-scale research collaboration involving over 20 research labs — a gener...
  • Tweet from Hamel Husain (@HamelHusain): For those wondering how ModernBert fits into RAG, a good starting point is @bclavie ‘s primer “Beyond the basics of RAG” He talks about the various kinds of encoders, when to use them, and different t...
  • Tweet from Sholto Douglas (@_sholtodouglas): A taste of what we've been thinking about recently :)Try it out! Its still a little raw, we expect it to have sharp edges - but it represents incredible algorithmic progress on test time compute.A...
  • Tweet from Jeremy Howard (@jeremyphoward): I'll get straight to the point.We trained 2 new models. Like BERT, but modern. ModernBERT.Not some hypey GenAI thing, but a proper workhorse model, for retrieval, classification, etc. Real practic...
  • Tweet from Daytona.io (@daytonaio): 🚀 Mapping the future of AI development! Here's your comprehensive guide to the AI Enablement Stack - from infrastructure to autonomous agents—a community-driven, open-source effort.A deep dive in...

OpenInterpreter ▷ #general (67 messagesđŸ”„đŸ”„):

OpenInterpreter 1.0 updates, Running commands in server mode, Google Gemini 2.0 multimodal, Local vs server command execution, OS mode functionality

  • OpenInterpreter 1.0 supports vision models: The 1.0 branch supports models with vision, allowing users to install via GitHub with pip install git+https://github.com/OpenInterpreter/open-interpreter.git@development.
    • Experiments show the --tools gui command is functional, connecting to different models or APIs as needed.
  • Server mode operation questions: A user queried how commands are executed when OI runs as a server, wondering if they run locally or on the server.
    • It was noted that some users run it in regular mode and use SSH for access, but they consider integrating a front end for efficiency.
  • Inquiry about Google Gemini 2.0 capabilities: A user expressed curiosity about the performance of Google Gemini 2.0 multimodal capabilities, specifically in OS mode.
    • Interest exists in how the new model compares to existing systems, particularly regarding its command execution features.
  • Control over local machines using OI: Discussions around OpenInterpreter’s ability to control local systems revealed limitations with mouse and code execution functionality.
    • Users reported that there are still issues with getting the expected OS mode capabilities to function fully.
  • Cleaning installations of OpenInterpreter: Concerns were raised about needing a clean installation due to issues faced with OpenInterpreter, especially after making multiple configurations.
    • Users discussed removing certain flags and adjusting commands to resolve errors and uncertainties about the setup process.

OpenInterpreter ▷ #O1 (1 messages):

O1 Channel Exploration, Understanding documentation

  • Seeking clarity on O1 functionality: A member expressed a need for a simpler explanation of how the O1 channel works after exploring it.
    • They acknowledged reading the documentation but still felt like a noob and appreciated any help offered.
  • Documentation not helpful enough: The same member pointed out that despite their efforts in reading the docs, it didn’t provide the necessary clarity.
    • They are looking for straightforward guidance to get up to speed with O1.

LM Studio ▷ #general (62 messagesđŸ”„đŸ”„):

LM Studio Model Loading Issues, Mobile Access to LM Studio, GPU Driver Problems, Image Input Models for LM Studio, Known Issues with AMD Drivers

  • LM Studio Error with Model Loading: A user reported encountering an error stating Safetensors header is unexpectedly large: bytes=2199142139136 when trying to load models, indicating potential issues with compatibility or file corruption.
    • Another user confirmed this message appeared for the MLX version of Llama 3.3 as well, leading them to redownload models in hopes of resolving the issue.
  • Connecting to LM Studio from Mobile Devices: Members discussed using LM Studio via mobile, with one user sharing an iOS app, 3Sparks Chat, that connects to the LM Studio server on a PC or Mac.
    • However, requests for an Android version were met with disappointment as there are currently no mobile solutions available.
  • Issues with Latest AMD Drivers: Users detailed problems with the AMD 24.12.1 drivers, which reportedly cause system stuttering when loading models in LM Studio, suggesting a broader conflict with the llama.cpp ROCm library.
    • Recommendations included downgrading to previous driver versions to mitigate performance issues experienced by some users.
  • Image Input Models for LM Studio: A user inquired about image input models for LM Studio, specifically for PC users, receiving information on the mlx-community/Llama-3.2-11B-Vision-Instruct-4bit model, which faced several loading issues.
    • There were discussions on model compatibility, with concerns raised about other formats not supporting Windows runtime.
  • General Hardware Configuration Discussion: Several users exchanged details about their hardware specifications, specifically on the compatibility of the 7900XTX GPU when using LM Studio and how variations in configuration can affect performance.
    • One user noted differences in experience despite similar configurations, indicating possible discrepancies in hardware performance or driver interactions.


LM Studio ▷ #hardware-discussion (3 messages):

Silicon Chips Performance, Benchmark Comparisons

  • Higher-end Silicon Chips Questioned for Speed: A member inquired whether prompt processing is faster on the higher-end Apple silicon chips (Max, Pro, Ultra) due to their improved memory bandwidth.
    • Another member noted that even these chips are not as fast as a 3090/4090-class GPU.
  • Access to Llama.cpp Benchmarks: A member shared that llama.cpp maintains benchmarks for each model at their GitHub discussion page.

OpenRouter (Alex Atallah) ▷ #announcements (1 messages):

Price reductions, Market competition

  • Gryphe Cuts Price by 7%: The price of gryphe/mythomax-l2-13b has dropped by 7% this morning, continuing the trend of price reductions in the market.
    • This is part of ongoing price wars in the competitive landscape of AI models.
  • Qwen Slashes Prices by 7.7%: Another significant 7.7% drop occurred on qwen/qwq-32b-preview as the price wars heat up.
    • These adjustments reflect the fierce competition among leading AI providers.
  • Mistral-Nemo Takes a 12.5% Hit: mistralai/mistral-nemo has seen a 12.5% price cut, indicating a proactive pricing strategy.
    • This reflects the intensifying market dynamics, as companies vie for customer attention and market share.

Links mentioned:

  • MythoMax 13B - API, Providers, Stats: One of the highest performing and most popular fine-tunes of Llama 2 13B, with rich descriptions and roleplay. #merge. Run MythoMax 13B with API
  • QwQ 32B Preview - API, Providers, Stats: QwQ-32B-Preview is an experimental research model focused on AI reasoning capabilities developed by the Qwen Team. As a preview release, it demonstrates promising analytical abilities while having sev...
  • Mistral Nemo - API, Providers, Stats: A 12B parameter model with a 128k token context length built by Mistral in collaboration with NVIDIA.The model is multilingual, supporting English, French, German, Spanish, Italian, Portuguese, Chines...

OpenRouter (Alex Atallah) ▷ #app-showcase (1 messages):

AI Ecosystem Maps, Crowdsourced AI Enablement Stack

  • Need for a Crowdsourced AI Enablement Stack: Many VC firms have published their AI ecosystem maps, but there is a demand for a truly crowdsourced and open-source AI enablement stack.
    • This initiative aims to keep developers informed on what tools to use, ensuring they won’t waste time in their projects. More details can be found on GitHub.
  • Feedback Request on AI Enablement Logic: There is an open call for contributions and feedback on the logic and structure of this AI enablement approach.
    • The goal is to create an up-to-date resource for developers, encouraging community input and collaboration.

Link mentioned: GitHub - daytonaio/ai-enablement-stack: A Community-Driven Mapping of AI Development Tools: A Community-Driven Mapping of AI Development Tools - daytonaio/ai-enablement-stack


OpenRouter (Alex Atallah) ▷ #general (62 messagesđŸ”„đŸ”„):

DeepSeek Models, OpenRouter Issues, Model and API Discussion, Data Management, User Experience Feedback

  • Exploration of DeepSeek Models for Learning: Users are experimenting with DeepSeek-v2 and DeepSeek V2.5 for coding assistance, emphasizing the benefit of inputting entire GitHub repos for better understanding of complex projects.
    • One user mentioned how DeepSeek helped with code optimization and commenting, while another warned against using it for advanced code creation.
  • OpenRouter User Support Challenges: Several users reported issues with OpenRouter, including unexpected account problems and unclear responses from support regarding missing balances.
    • User frustrations were evident as one sought clarity on their balance disappearing, highlighting the need for improved communication from support.
  • API and Model Capability Discussions: There were questions about the o1 reasoning_effort parameter’s accessibility, indicating users’ interest in model capabilities and interfaces; a minimal sketch of passing the parameter through OpenRouter follows this list.
    • Users also discussed the utility of different models and the importance of privacy settings for sensitive tasks, especially regarding healthcare data.
  • User Experiences with OpenRouter Features: Participants shared their perspectives on the interface and its suitability for various uses, with some suggestions for improvements in user navigation.
    • There was a discussion about interface tagging, clarity, and the need for a more streamlined user experience in AI applications.
  • Community Interaction and Humor: Members participated in light-hearted banter and joke discussions about user bios and the silliness of online personas.
    • The community seemed supportive, with users engaging in fun commentary alongside serious inquiries about the platform.
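
For readers who want to try the reasoning_effort parameter themselves, here is a minimal sketch against OpenRouter’s OpenAI-compatible chat endpoint. Treat it as illustrative: whether OpenRouter forwards reasoning_effort through to o1 was exactly the open question in the channel, and the API key below is a placeholder.

```python
# Minimal sketch: sending reasoning_effort through OpenRouter's
# OpenAI-compatible endpoint. Whether the parameter is forwarded to o1
# was the open question in the channel; the key below is a placeholder.
import requests

resp = requests.post(
    "https://openrouter.ai/api/v1/chat/completions",
    headers={"Authorization": "Bearer YOUR_OPENROUTER_KEY"},
    json={
        "model": "openai/o1",
        "messages": [{"role": "user", "content": "Prove that sqrt(2) is irrational."}],
        "reasoning_effort": "high",  # OpenAI accepts "low" | "medium" | "high"
    },
    timeout=120,
)
print(resp.json()["choices"][0]["message"]["content"])
```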


OpenRouter (Alex Atallah) ▷ #beta-feedback (1 messages):

Programmatic feature requests, Provider API integration

  • Request for Programmatic Feature Implementation: A member expressed interest in seeing a programmatic version of a specific feature, emphasizing the ability to pass the provider API key with the request.
    • The comment “I’d love to see a programmatic version of this feature” highlights the desire for increased functionality in API integration.
  • Interest in API Key Functionality: The same member reinforced the need to pass the provider API key along with each request, to streamline access and improve the developer experience.
    • This indicates a broader interest in API features that cater to developers’ needs for flexibility and efficiency.

Nous Research AI ▷ #general (47 messagesđŸ”„):

GitHub Copilot Free Tier, Granite 3.1-8B-Instruct Model, LM Studio for Local LLMs, Model Context Protocol Testing, Gemini Flash Thinking Experimental

  • GitHub Copilot now free for all: Announced a new free tier for GitHub Copilot available immediately, with no trial or subscription required.
    • Users can take advantage of this offer without needing to provide a credit card, and it intriguingly includes Claude for enhanced functionality.
  • Granite 3.1-8B-Instruct impresses users: Users are excited about the Granite 3.1-8B-Instruct model, which has been fine-tuned for long context tasks and performs well in real-world applications.
  • LM Studio offers convenient model access: LM Studio allows users to run LLMs locally, chat with documents, and download model files from Hugging Face.
    • It supports architectures like Llama 3.2, Mistral, and Qwen 2.5, catering to those wanting offline functionality.
  • Experimenting with Model Context Protocol: A user plans to implement a quick server in Bash to test the model context protocol despite initial reservations.
    • This experiment aims to gauge the practical value of the protocol in a real-world setting; a rough sketch of such a server (in Python rather than Bash) appears after this list.
  • Gemini Flash Thinking impresses: Gemini 2.0 Flash Thinking produced a witty response regarding the meme ‘Hello there!’ from Star Wars, highlighting its contextual relevance.
    • The final response cleverly weaved in cultural nuances and character specifics, showcasing the model’s engaging capacities.
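
For readers who want to poke at the protocol the same way, below is a rough sketch of a toy MCP-style server, assuming the spec’s stdio transport of newline-delimited JSON-RPC messages; the echo tool and its schema are invented for illustration.

```python
# Toy MCP-style server sketch, assuming the protocol's stdio transport
# (newline-delimited JSON-RPC 2.0 messages). The "echo" tool is invented.
import json
import sys

def reply(msg_id, result):
    # One JSON-RPC response per line on stdout.
    sys.stdout.write(json.dumps({"jsonrpc": "2.0", "id": msg_id, "result": result}) + "\n")
    sys.stdout.flush()

for line in sys.stdin:
    req = json.loads(line)
    method = req.get("method")
    if method == "initialize":
        reply(req["id"], {
            "protocolVersion": "2024-11-05",
            "capabilities": {"tools": {}},
            "serverInfo": {"name": "toy-echo-server", "version": "0.1"},
        })
    elif method == "tools/list":
        reply(req["id"], {"tools": [{
            "name": "echo",
            "description": "Echo the input text back to the caller",
            "inputSchema": {"type": "object",
                            "properties": {"text": {"type": "string"}}},
        }]})
    elif method == "tools/call":
        text = req["params"]["arguments"].get("text", "")
        reply(req["id"], {"content": [{"type": "text", "text": text}]})
    # Notifications (no "id") and unknown methods are ignored in this sketch.
```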


Nous Research AI ▷ #ask-about-llms (2 messages):

Agent Message Formatting, Fine-Tuning Dataset Consistency

  • Agent messages lack sentence separation: A member noted that the latest messages from the agent are missing periods between sentences, indicating a formatting quirk.
    • They compared this behavior with gpt-4o, confirming it doesn’t exhibit the same issue.
  • Using uniform instructions in fine-tuning: A member inquired about the implications of using the same instruction across a fine-tuning dataset consisting of ‘Question’ and ‘Answer’ pairs.
    • Their concern centered on whether this approach could lead to suboptimal model performance compared to varying the instructions; a small mitigation sketch follows this list.
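
A common mitigation for that concern is to paraphrase the instruction across examples so the model does not overfit one fixed template. A minimal sketch, with invented templates and data:

```python
# Sketch: vary the instruction across Q/A pairs when building a
# fine-tuning file, so the model doesn't overfit a single template.
# The templates and example pair below are invented for illustration.
import json
import random

TEMPLATES = [
    "Answer the following question.",
    "Respond concisely to the question below.",
    "You are a helpful assistant. Answer the user's question.",
]

pairs = [("What is ModernBERT?", "An encoder-only transformer with an 8k context window.")]

with open("train.jsonl", "w") as f:
    for question, answer in pairs:
        row = {"instruction": random.choice(TEMPLATES),
               "input": question,
               "output": answer}
        f.write(json.dumps(row) + "\n")
```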

Genesis Project, Generative Physics Engine, Open Source Robotics Simulation

  • Genesis Project Revolutionizes Robotics with Real Physics: The Genesis project has been announced as a generative physics engine capable of creating 4D dynamical worlds, significantly enhancing robotics and physical AI applications.
    • Developed in pure Python, it boasts a simulation speed up to 430,000 times faster than real-time, with training times for robotic locomotion policies reduced to just 26 seconds on a single RTX4090.
  • Open Source Access to Genesis Physics Engine: The Genesis physics engine is fully open source, inviting collaboration and contributions from the community to enhance its functionality.
    • It integrates advanced physics solvers to simulate entire physical worlds, aiming for a completely automated data generation process for robotics.
  • Tutorial for Robotic Locomotion with Genesis: A comprehensive tutorial explains how to train a robotic locomotion policy utilizing the Genesis physics engine.
    • This training process showcases the engine’s efficiency, which is 10-80 times faster than existing GPU-accelerated solutions like Isaac Gym; a sketch of the engine’s basic simulation loop follows.
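
As a taste of the API, here is a minimal simulation-loop sketch based on the quickstart Genesis published at release; exact names may have shifted since (one tester in the Twitter recap above already hit a missing method), so treat it as indicative rather than authoritative.

```python
# Minimal Genesis simulation loop, per the project's release quickstart.
# Treat names as indicative; the API was brand new and in flux.
import genesis as gs

gs.init(backend=gs.gpu)                      # or gs.cpu without a GPU

scene = gs.Scene(show_viewer=False)
scene.add_entity(gs.morphs.Plane())          # ground plane
scene.add_entity(
    gs.morphs.MJCF(file="xml/franka_emika_panda/panda.xml")  # bundled robot asset
)

scene.build()                                # compile the scene before stepping
for _ in range(1000):
    scene.step()                             # advance the physics simulation
```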


Modular (Mojo đŸ”„) ▷ #mojo (37 messagesđŸ”„):

Mojo Indexing and Casting, SIMD Keying in Dict, Running Mojo on Android, Python Integration Ideas, Negative Indexing Debate

  • Debate on Mojo Indexing Practices: A discussion emerged regarding the use of Int for indexing in Mojo, with opinions split on whether negative indexing should be integrated into default implementations or if alternatives like .last() suffice.
    • Darkmatter argued that negative indexing is often a programming error, stating it introduces unnecessary operational costs, while others highlighted its common usage in languages like Python.
  • Bug in SIMD Structs and Dicts: A significant bug was raised: using a struct based on SIMD as a key in a Dict causes segmentation faults, and a separate issue with a missing scaling_cur_freq entry was also reported to affect benchmarks.
    • This bug is documented in GitHub Issue #3781 which details steps to reproduce and seeks resolution within the suggested 6-week window.
  • Running Mojo on Native Android: Some members discussed the possibility of running Mojo on native Android, mentioning a setup via Magic in a Docker container, although it is considered ‘wildly unsupported’.
    • It was noted that while self-setup is possible, licensing rules prevent the creation of a publicly distributable Docker image.
  • Python Integration Considerations: There was an inquiry about creating Python types for Mojo, specifically examining the integration of SIMD and conditional conformance to enable support for various data types.
    • Concerns were raised about maintaining separate handling for integral and floating-point types due to ABI requirements, while the idea of supporting arbitrary bit-width integers was met with enthusiasm.
  • Safety and Efficiency in Indexing: The discussion on indexing raised safety concerns, discussing the implications of implicit type casting from UInt to Int and the performance costs associated with checks for negative indices.
    • Darkmatter suggested that while overloads could be implemented, they would complicate existing type casting rules and potentially introduce ambiguity; the sketch below illustrates the per-access check that negative indexing adds.
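
To make the cost concrete, here is what Python-style negative indexing does on every access: an extra sign branch and a normalization step before the bounds check. Purely illustrative Python, not Mojo code.

```python
# Illustrative (Python, not Mojo): the per-access work that supporting
# negative indices adds on top of a plain bounds check.
def get(buf: list, i: int):
    n = len(buf)
    if i < 0:            # the extra branch under debate
        i += n           # normalize: -1 becomes n - 1
    if not 0 <= i < n:   # bounds check needed either way
        raise IndexError(i)
    return buf[i]

assert get([10, 20, 30], -1) == 30   # the .last()-style access spelled as -1
```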

Link mentioned: [BUG] Segfault if using a struct based on SIMD as key in Dict · Issue #3781 · modularml/mojo: Bug description When using a struct containing a sufficiently large SIMD as key in a Dict, a segmentation fault is encountered. Steps to reproduce Execute the following code: from collections impor



DSPy ▷ #general (28 messagesđŸ”„):

Synthetic Data Primer, Rate Limiting in DataBricks, DSPy Signature Outputs, Provisioned Throughput Costs, LiteLLM Proxy Layer

  • Explainer on Synthetic Data in Progress: A member is working on an explainer that covers the fundamentals of synthetic data, how it is created, its uses, and its impact on model capabilities.
    • They are looking for community input on specific questions or areas of curiosity about synthetic data.
  • Rate Limiting Solutions in DataBricks: A member discussed the potential to implement a rate limiter within DataBricks due to high costs incurred from throughput allocations.
    • Another suggested using the LiteLLM proxy layer for features like rate-limiting and budgeting.
  • Question on DSPy Signature Class Outputs: A user inquired about examples of producing a dspy.Signature as a class type instead of a string, expressing interest in using the DSPy framework.
    • They are exploring the feasibility of directly returning a signature with specified fields; a minimal class-based example follows this list.
  • Concerns Over Provisioned Throughput Costs: A member recounted their experience with high costs from provisioned throughput in DataBricks, raising concerns about unnecessary charges.
    • They clarified the importance of enabling the scale to 0 option to avoid charges when not in use.
  • Deploying LiteLLM in DataBricks: There was a discussion about whether the LiteLLM proxy could be deployed within a DataBricks notebook or if a separate VM is necessary.
    • One member confirmed LiteLLM can be managed alongside the service within a controlled environment.
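
On the signature question: class-based signatures are a documented DSPy pattern, so the class itself (not a string) can be handed to a module. A minimal sketch; the field names and LM choice are illustrative, not from the discussion.

```python
# Minimal class-based dspy.Signature (DSPy 2.5+ style); field names and
# the LM below are illustrative.
import dspy

dspy.configure(lm=dspy.LM("openai/gpt-4o-mini"))  # any configured LM works

class QA(dspy.Signature):
    """Answer the question concisely."""
    question: str = dspy.InputField()
    answer: str = dspy.OutputField()

predict = dspy.Predict(QA)                 # pass the class, not a string
result = predict(question="What is an encoder-only model good for?")
print(result.answer)                       # typed output field
```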


LlamaIndex ▷ #blog (3 messages):

Multi-agent systems, Vectara RAG capabilities, AI journey survey

  • Building Multi-Agent Systems with LlamaIndex: A post discusses how to evolve from a single agent to a coordinated multi-agent system using LlamaIndex, providing practical code examples.
    • It emphasizes the importance of agent factories in this transition, detailed in the full article here.
  • Unlocking Vectara’s RAG Power: Discover how to leverage Vectara’s powerful RAG capabilities including loading data and querying with streaming and reranking options.
    • The post addresses building agentic RAG applications while highlighting the full capabilities of Vectara’s managed service here.
  • Participate in Vercel’s State of AI Survey: A call to action invites community members to share their progress in their AI journey through @vercel’s State of AI Survey.
    • Participants can contribute to the understanding of the AI landscape by visiting this link.

Link mentioned: State of AI Developer Survey: Share your experiences, challenges, and insights, and help shape the future of AI-driven innovation.


LlamaIndex ▷ #general (23 messagesđŸ”„):

HuggingFaceEmbedding model loading, Azure OpenAI embedding rate limits, TextNode insert errors

  • HuggingFaceEmbedding can’t load from local: A user encountered issues while trying to load a HuggingFace embedding model from local storage and received a warning about creating a new model with mean pooling.
    • Another user clarified that passing just the model name makes the loader check the local cache folder first, so the model is not re-downloaded unnecessarily.
  • Solutions for Azure OpenAI embedding rate limits: One user reported persistent rate limit errors with Azure OpenAI embedding models and sought suggestions to resolve this issue.
    • A suggestion included increasing the max retries and ingesting documents more slowly to avoid rate limiting issues.
  • Confusion over inserting TextNodes: A user faced an AttributeError when trying to insert TextNodes into the index, indicating a missing get_doc_id attribute.
    • It was advised that the proper method for inserting nodes is insert_nodes, and that processing them one at a time might help with rate limiting; see the sketch below.
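
A minimal sketch of that advice, assuming a standard in-memory VectorStoreIndex and whatever embedding model is configured; the node texts and the one-second pacing are illustrative.

```python
# Sketch of the advice above: insert TextNodes via insert_nodes(), paced
# to stay under embedding rate limits. Node texts are illustrative.
import time
from llama_index.core import VectorStoreIndex
from llama_index.core.schema import TextNode

nodes = [
    TextNode(text="ModernBERT supports an 8k token context window."),
    TextNode(text="It was trained on 2T tokens, including a large amount of code."),
]

index = VectorStoreIndex(nodes=[])   # empty index; uses the configured embed model
for node in nodes:
    index.insert_nodes([node])       # insert_nodes, not insert() (which takes Documents)
    time.sleep(1)                    # crude pacing to dodge 429s
```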


LlamaIndex ▷ #ai-discussion (1 messages):

Vision Parse, PDF to Markdown

  • Vision Parse Library Launches for Markdown Conversion: A member shared the launch of Vision Parse, an open-source Python library that converts PDF documents into well-formatted markdown content using advanced Vision Language Models.
    • The library leans on state-of-the-art vision models to keep document formatting intact during conversion.
  • Excitement Around Open Source Contributions: The community showed enthusiasm for the release of Vision Parse, highlighting its potential to simplify document handling for developers.
    • Members discussed the importance of open-source projects in fostering innovation and collaboration within the tech space.

Nomic.ai (GPT4All) ▷ #announcements (1 messages):

Data Mapping Series, Scalable Graphics, Embeddings, Dimensionality Reduction, Unstructured Data

  • Final Installment of Data Mapping Series Released: The Nomic Team announced the release of the final installment in the Data Mapping Series, focusing on scalable graphics for managing embeddings and unstructured data, which can be read here.
    • This series details how machine learning concepts like embeddings and dimensionality reduction empower users to visualize massive datasets in their web browsers.
  • Six Part Data Mapping Exploration: The latest post wraps up a six-part series aimed at elucidating the technologies behind the Nomic Atlas platform with respect to unstructured data visualization.

Link mentioned: Data Maps, Part 4: Why Are Web Browsers The Best Data Browsers?: Why Are Web Browsers The Best Data Browsers?


Nomic.ai (GPT4All) ▷ #general (17 messagesđŸ”„):

Nomic BERT issue, Code Interpreter Pull Request, Loading System Messages, GGUF File Issues, Device Requirements

  • Nomic BERT Embedding Model Issue: Users reported errors while loading Nomic’s embedding model from Huggingface due to a recent commit that broke functionality, specifically this commit. Fortunately, the issue has now been fixed.
  • Pull Request for Code Interpreter Tool: A pull request titled Code interpreter by manyoso is in progress, aimed at adding a code interpreter tool based on the jinja template. Members expressed interest in following its developments.
  • Loading System Messages Discussion: A user inquired about the possibility of a load button for loading system messages from text files, expressing frustration with copy-pasting. There seems to be a demand for this feature due to the many context-setting text files users have.
  • GGUF File Compatibility Issues: Discussions arose about various .GGUF files with broken chat templates, with mentions of files like Llama-3.3-70B-Instruct-Q4_K_M and Qwen2-72B-Instruct.Q4_K_M. Fixes for these files were promised in the next release.
  • Device Requirements for GPT4ALL: A user requested information regarding the device requirements for GPT4ALL, prompting another member to share a link to the official system requirements. This document outlines the necessary specifications for running GPT4All.


tinygrad (George Hotz) ▷ #general (16 messagesđŸ”„):

TinyChat Installation Issues, Tiktoken Replacement Discussions, Scroll Direction Bug Report, Bounty Project Engagement, Layout Notation Insights

  • TinyChat installation faces hurdles: After trying to set up TinyChat, a user reported issues with missing dependencies like tiktoken, and experienced a system freeze for ~30 seconds during installation.
    • They also noted a strange prompt about finding devices on the local network, questioning its necessity.
  • Tiktoken needs a tailored replacement: George Hotz acknowledged the need for a replacement for tiktoken and raised the question of whether it can be written directly within TinyGrad.
    • His focus on 8GB of RAM as a limitation was a key point in the discussion.
  • Scroll direction switches unexpectedly: A user reported an odd issue where the scroll direction on their Mac was reversed after running TinyChat, returning to normal after terminating the application.
    • George Hotz expressed surprise over this problem, confirming it’s perplexing.
  • Bounty Project engagement strategies: Chenyu mentioned that the goal of the bounties is to advance the project and emphasized engaging with contributors who add value through tests and improvements.
    • They pointed to contributions in the form of tests and optimization discussions as essential to driving progress.
  • Discussion on layout notation: A user shared thoughts on layout notation being powerful yet complex, noting the effectiveness of the graphical representations in the documentation.
    • They highlighted that the complement section offers a unique perspective by describing all elements not selected, unlike traditional masks.

Links mentioned:

  • View Merges: Relevant code: https://github.com/unknownusername504/MicroGrad/blob/main/micrograd/tensors/viewable_tensor.py What is the goal of a view? Moving memory around is an expensive operation. If you have ...
  • Issues · tinygrad/tinygrad: You like pytorch? You like micrograd? You love tinygrad! ❀ - Issues · tinygrad/tinygrad

tinygrad (George Hotz) ▷ #learn-tinygrad (1 messages):

khaner2162: Hi, why does the scheduler realize before expand or unsafe pad ops?


Cohere ▷ #discussions (6 messages):

Introduction of Ikuo618, Reminder about channel etiquette

  • Ikuo618 introduces himself: Ikuo618 shared his background as a senior AI developer with over 6 years of experience in building and deploying AI models in DP, NLP, and CV.
    • He highlighted his expertise in Python, as well as his proficiency with TensorFlow and PyTorch to develop intelligent systems.
  • Etiquette reminder on reposting messages: A reminder was issued to a user not to repost their messages in multiple channels to maintain chat organization.
    • This serves as a gentle nudge for everyone to respect channel etiquette while participating in discussions.

Cohere ▷ #questions (2 messages):

Platform Availability

  • Confirmation on Platform Status: A member asked whether a specific feature is available on the platform, and another member confirmed it is not on the platform yet.
    • The inquiring member expressed gratitude for the confirmation with a smiley face, a friendly exchange typical of the channel.

Cohere ▷ #api-discussions (3 messages):

Cohere API pricing, API keys types, Rate limits for endpoints

  • Cohere API offers free and paid keys: Cohere provides two types of API keys: evaluation keys, which are free but come with limited usage, and production keys, which are paid and carry far fewer restrictions.
  • Detailed rate limits for Cohere API: The Cohere API has specific rate limits for each endpoint, with trial limits significantly lower than production ones; for example, the Chat endpoint is limited to 20 calls per minute for trial users and 500 per minute for production users.
    • Other endpoints like Embed and Classify have their own distinct limits, and trial keys are capped at 1,000 calls per month across all endpoints combined; a generic backoff sketch for staying under these limits follows.
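
Hitting these ceilings usually shows up as HTTP 429 errors; the standard workaround is exponential backoff with jitter. A generic sketch, where call_fn stands in for any SDK call (chat, embed, classify):

```python
# Generic exponential backoff with jitter for rate-limited API calls.
# call_fn stands in for any SDK call; ideally catch the SDK's own
# rate-limit exception instead of bare Exception.
import random
import time

def with_backoff(call_fn, max_retries=5):
    for attempt in range(max_retries):
        try:
            return call_fn()
        except Exception:
            if attempt == max_retries - 1:
                raise                                   # give up after the last retry
            time.sleep(2 ** attempt + random.random())  # 1s, 2s, 4s... plus jitter
```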

Link mentioned: API Keys and Rate Limits — Cohere: This page describes Cohere API rate limits for production and evaluation keys.


Cohere ▷ #cmd-r-bot (1 messages):

ikuo618: hi!


Cohere ▷ #projects (1 messages):

benny0917: Good looking product <@799853279017173033> congrats!


Torchtune ▷ #general (6 messages):

Torchtune Phi 4 Support, New Contributor Role, Implementation Differences Between Phi 3 and Phi 4

  • Torchtune currently lacks Phi 4 support: A member inquired about using Torchtune for Phi 4, to which it was confirmed that support currently lies only with Phi 3 and contributions for Phi 4 are welcomed.
    • Members expressed interest in potentially contributing to enable Phi 4 support.
  • New Contributor Role Introduced: A new Contributor role was launched on Discord to recognize community members who enhance Torchtune for everyone.
    • This initiative aims to acknowledge contributions and bridge the gap between GitHub and Discord usernames.
  • Minimal Differences Anticipated for Phi 4: A discussion arose concerning the implementation differences between Phi 3 and Phi 4, with one member noting they appear to be very minimal.
    • An image was shared that seemingly supports this statement, sparking further curiosity about the changes.

Link mentioned: torchtune.models — torchtune 0.4 documentation: no description found


Torchtune ▷ #papers (2 messages):

Asynchronous RLHF, Post-Training Techniques, Model Safety and Robustness

  • Asynchronous Approach in RLHF: The traditional RLHF method is computationally inefficient; however, separating generation and learning enables faster, asynchronous training of models, as suggested in the research on online but off-policy RLHF.
    • This research highlights a key question: how much off-policyness can we tolerate while ensuring efficient learning without sacrificing performance.
  • Importance of Post-Training for Models: Models require post-training after the pre-training stage to ensure they are safe and follow human instructions effectively, as discussed in the Allen AI blog.
    • The process involves instruction fine-tuning and learning from human feedback to avoid eroding essential capabilities during specialization.
  • Challenges of Instruction Tuning: Post-training methods, initially inspired by InstructGPT, can lead to a decline in certain model capabilities as more specialized skills are taught.
    • Finding the balance between enhancing capabilities like coding while retaining skills for poetry and instruction following remains a complex challenge.


LLM Agents (Berkeley MOOC) ▷ #hackathon-announcements (1 messages):

Hackathon submission deadline, LLM Agents Hackathon, Final reminders, Project submissions, Last-minute questions

  • Final Call for Hackathon Submissions!: A reminder has been issued that the submission deadline for the hackathon is tonight at 11:59 PM PST (12/19).
  • Support for Last-Minute Questions: Participants are encouraged to drop any last-minute questions in the channel for assistance.
    • The community is rallying support to ensure everyone can finish strong before the deadline.







{% else %}

The full channel by channel breakdowns have been truncated for email.

If you want the full breakdown, please visit the web version of this email: [{{ email.subject }}]({{ email_url }})!

If you enjoyed AInews, please share with a friend! Thanks in advance!

{% endif %}