Frozen AI News archive

Test-Time Training, MobileLLM, Lilian Weng on Hallucination (Plus: Turbopuffer)

**Lilian Weng** released a comprehensive literature review of **hallucination detection** and **anti-hallucination methods**, covering techniques like FactualityPrompt, SelfCheckGPT, and WebGPT. **Facebook AI Research (FAIR)** published **MobileLLM**, a sub-billion-parameter on-device language model architecture achieving performance comparable to **Llama 2 7B** via innovations like thin-and-deep layouts and shared weights. A new **RNN-based LLM architecture** with expressive hidden states was introduced, directly replacing attention and scaling better than Mamba and Transformer models for long-context modeling. Additionally, **Tsinghua University** open-sourced **CodeGeeX4-ALL-9B**, a multilingual code generation model that excels at code assistance.

Canonical issue URL

AI News for 7/8/2024-7/9/2024. We checked 7 subreddits, 384 Twitters and 29 Discords (463 channels, and 2038 messages) for you. Estimated reading time saved (at 200wpm): 250 minutes. You can now tag @smol_ai for AINews discussions!

Two major stories we missed, and a new one we like but didn't want to give the whole space to:

  1. Lilian Weng on Extrinsic Hallucination: We usually drop everything when the Lil'Log updates, but she seems to have quietly shipped this absolute monster lit review without announcing it on Twitter. Lilian defines the SOTA on Hallucination Detection (FactualityPrompt, FActScore, SAFE, FacTool, SelfCheckGPT, TruthfulQA) and Anti-Hallucination Methods (RARR, FAVA, Rethinking with Retrieval, Self-RAG, CoVE, RECITE, ITI, FLAME, WebGPT), and ends with a brief reading list on other Hallucination eval benchmarks. We definitely need to do a lot of work on this for our Reddit recaps.
  2. MobileLLM: Optimizing Sub-Billion Parameter Language Models for On-Device Use: One of the most hyped FAIR papers accepted at the upcoming ICML (though not even getting a spotlight, hmm), focused on sub-billion-scale, on-device model architecture research, showing a 350M model reaching the same perf as Llama 2 7B, surprisingly in a chat context. Yann LeCun's highlights: "1) thin and deep, not wide; 2) shared matrices for token->embedding and embedding->token; 3) shared weights between multiple transformer blocks."
  3. Learning to (Learn at Test Time): RNNs with Expressive Hidden States (advisor, author tweets): Following up on ICML 2020 work on Test-Time Training, Sun et al publish a "new LLM architecture, with linear complexity and expressive hidden states, for long-context modeling" that directly replaces attention, "scales better (from 125M to 1.3B) than Mamba and Transformer" and "works better with longer context". The main insight is replacing the hidden state of an RNN with a small neural network (instead of a feature vector used as memory). The basic intuition makes sense: "If you believe that training neural networks is a good way to compress information in general, then it will make sense to train a neural network to compress all these tokens." If we can nest networks all the way down, how deep does this rabbit hole go?
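The detection methods Lilian surveys share a core intuition, visible most clearly in SelfCheckGPT: if you resample the model and a claim keeps changing, it is probably hallucinated. A minimal sketch of that idea (real SelfCheckGPT scores agreement with NLI, QA, or n-gram models; the token-overlap scorer and the toy samples here are stand-ins):

```python
# SelfCheckGPT-style consistency check, toy version.
# Assumption: `samples` stands in for repeated stochastic LLM generations;
# the Jaccard token overlap below is a crude stand-in for NLI/QA scoring.

def token_overlap(a: str, b: str) -> float:
    """Jaccard overlap between the token sets of two sentences."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb) if ta | tb else 0.0

def selfcheck_score(claim: str, samples: list) -> float:
    """Average disagreement of `claim` with resampled outputs.

    Higher score -> the claim is less supported by the samples,
    i.e. more likely hallucinated.
    """
    if not samples:
        return 1.0
    support = sum(token_overlap(claim, s) for s in samples) / len(samples)
    return 1.0 - support

samples = [
    "the eiffel tower is in paris",
    "the eiffel tower is located in paris",
    "paris is home to the eiffel tower",
]
consistent = selfcheck_score("the eiffel tower is in paris", samples)
inconsistent = selfcheck_score("the eiffel tower is in berlin", samples)
assert consistent < inconsistent  # the supported claim scores lower
```

The point of the design is that it needs no external knowledge base: the model's own sampling variance is the hallucination signal.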
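MobileLLM's two sharing tricks (tied input/output embedding matrices, and reusing one transformer block's weights at several depths) can be seen in a rough parameter count. A sketch under stated assumptions: all dimensions are hypothetical, and norms/biases are ignored:

```python
# Back-of-envelope parameter counts for MobileLLM-style weight sharing.
# All dims are illustrative, not the paper's; norms and biases omitted.

def block_params(dim, ffn_mult=4):
    """Rough parameter count of one transformer block: attention + FFN."""
    attn = 4 * dim * dim               # q, k, v, and output projections
    ffn = 2 * dim * (ffn_mult * dim)   # up- and down-projection
    return attn + ffn

def model_params(vocab, dim, depth, share_embeddings, shared_blocks):
    # Tied embeddings store one vocab x dim matrix instead of two.
    emb = vocab * dim if share_embeddings else 2 * vocab * dim
    # Each unique block's weights are reused `shared_blocks` times,
    # keeping effective depth while shrinking stored parameters.
    unique_blocks = depth // shared_blocks
    return emb + unique_blocks * block_params(dim)

full = model_params(32000, 512, 30, share_embeddings=False, shared_blocks=1)
shared = model_params(32000, 512, 30, share_embeddings=True, shared_blocks=2)
assert shared < full  # same depth, roughly half the stored weights
```

This is also why "thin and deep" pairs well with sharing: depth comes cheap when repeated layers reuse the same weights, and at sub-billion scale the embedding tables are a large enough fraction of the model that tying them matters.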
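The TTT idea reduces to: the recurrent state *is* the weights of a small inner model, and "updating the state" means taking a gradient step on a self-supervised loss as each token arrives. A toy 1-D sketch (the paper's inner learner is a linear model or MLP over token embeddings; everything here is illustrative):

```python
# Toy Test-Time-Training "layer": the hidden state is the weight w of a
# tiny inner model, trained online on a reconstruction loss.
# Assumption: scalar tokens and an identity reconstruction target,
# purely to make the inner gradient step visible.

def ttt_layer(tokens, lr=0.1):
    w = 0.0                        # hidden state = inner model's parameters
    outputs = []
    for x in tokens:
        pred = w * x               # inner model: scalar linear map
        grad = 2 * (pred - x) * x  # d/dw of (w*x - x)^2, target = identity
        w -= lr * grad             # "training" at test time updates the state
        outputs.append(w * x)      # emit using the freshly updated state
    return outputs, w

outs, w = ttt_layer([1.0] * 50)
# After many identical tokens the inner model converges to the loss
# minimum (w = 1): the state has "compressed" the stream it saw.
assert abs(w - 1.0) < 1e-3
```

Unlike attention, the state here is fixed-size regardless of context length (hence linear complexity), yet it is updated by learning rather than by a hand-designed recurrence, which is the "expressive hidden state" claim.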

Turbopuffer also came out of stealth with a short, well-received piece.


{% if medium == 'web' %}

Table of Contents

[TOC]

{% else %}

The Table of Contents and Channel Summaries have been moved to the web version of this email: [{{ email.subject }}]({{ email_url }})!

{% endif %}


AI Twitter Recap

We are running into issues with the Twitter pipeline, please check back tomorrow.


AI Reddit Recap

Across r/LocalLlama, r/machinelearning, r/openai, r/stablediffusion, r/ArtificialInteligence, r/LLMDevs, r/Singularity. Comment crawling works now but still has lots of room to improve!

AI Models and Architectures

AI Safety and Ethics

AI Applications

AI Capabilities and Concerns

Memes and Humor


AI Discord Recap

A summary of Summaries of Summaries

1. Large Language Model Advancements

2. Innovative AI Research Frontiers

3. AI Tooling and Deployment Advances

4. Ethical AI Debates and Legal Implications

5. Model Performance Optimization

6. Generative AI in Storytelling

7. AI in Education


PART 1: High level Discord summaries

HuggingFace Discord


Unsloth AI (Daniel Han) Discord


CUDA MODE Discord


Nous Research AI Discord


Modular (Mojo 🔥) Discord


LM Studio Discord


Eleuther Discord


Stability.ai (Stable Diffusion) Discord


OpenAI Discord


LlamaIndex Discord


Perplexity AI Discord


LAION Discord


OpenRouter (Alex Atallah) Discord


Latent Space Discord


LangChain AI Discord


OpenInterpreter Discord


tinygrad (George Hotz) Discord


Interconnects (Nathan Lambert) Discord


LLM Finetuning (Hamel + Dan) Discord


Cohere Discord


Mozilla AI Discord


AI Stack Devs (Yoko Li) Discord


MLOps @Chipro Discord


LLM Perf Enthusiasts AI Discord


The Alignment Lab AI Discord has no new messages. If this guild has been quiet for too long, let us know and we will remove it.


The OpenAccess AI Collective (axolotl) Discord has no new messages. If this guild has been quiet for too long, let us know and we will remove it.


The Torchtune Discord has no new messages. If this guild has been quiet for too long, let us know and we will remove it.


The DiscoResearch Discord has no new messages. If this guild has been quiet for too long, let us know and we will remove it.


The AI21 Labs (Jamba) Discord has no new messages. If this guild has been quiet for too long, let us know and we will remove it.


PART 2: Detailed by-Channel summaries and links

{% if medium == 'web' %}

HuggingFace ▷ #general (291 messages🔥🔥):

  • GPTs agents and knowledge files
  • OpenAI Platform changes
  • Handling forbidden words in code
  • Context window impacts on AI models
  • Gemma model issues

Links mentioned:


HuggingFace ▷ #today-im-learning (11 messages🔥):

  • Discord bot with historical characters
  • Huggingface NLP course
  • VRAM usage fluctuation
  • Resume training from checkpoint
  • Padding and VRAM stabilization

HuggingFace ▷ #cool-finds (2 messages):

  • Generative AI on Storytelling
  • AI Knowledge Management

Link mentioned: The KMWorld AI 100: The Companies Empowering Intelligent Knowledge Management: It's easy to become overwhelmed, even awestruck at the amount of information about AI, particularly GenAI, being thrown at us on a daily basis. The ability of AI technologies to process vast amounts o...


HuggingFace ▷ #i-made-this (4 messages):

  • Intel HF models
  • Gemma2:27B update
  • New Qdurllm demo
  • Early Exit in LLM research

Links mentioned:


HuggingFace ▷ #reading-group (1 messages):

  • Pentesting in AI
  • PentestGPT

HuggingFace ▷ #computer-vision (14 messages🔥):

  • YoloV1 limitations
  • YoloV8 re-implementation
  • Emotion to body language research papers
  • Inference with fine-tuned models
  • Document image quality prediction

Link mentioned: Serializing Classifier and Regressor heads in Yolo models · Issue #10392 · ultralytics/ultralytics: Search before asking I have searched the YOLOv8 issues and discussions and found no similar questions. Question Hi team, I hope you guys are doing great. It will be great if you can share your thou...


HuggingFace ▷ #diffusion-discussions (1 messages):

  • sd-vae artifacting
  • blue and white pixels

Unsloth AI (Daniel Han) ▷ #general (136 messages🔥🔥):

  • New Documentation Website
  • Finetuning Challenges on Kaggle
  • Training Issues
  • Model Usage Requests
  • Community Contributions

Links mentioned:


Unsloth AI (Daniel Han) ▷ #off-topic (24 messages🔥):

  • Upcoming Unsloth Vision Model Support
  • Medical Data Translation with Llama 3
  • Llama 3 and Swedish
  • Training Llama 3 on Medical Data
  • Using Pre-trained Llama 3 Models on Unsloth

Link mentioned: AI-Sweden-Models/Llama-3-8B-instruct · Hugging Face: no description found


Unsloth AI (Daniel Han) ▷ #help (43 messages🔥):

  • RAG with finetuned models
  • RAFT for better responses
  • Creating synthetic datasets from PDFs
  • Speeding up inference
  • Training methods and completion-only finetuning

Links mentioned:


Unsloth AI (Daniel Han) ▷ #community-collaboration (40 messages🔥):

  • Training custom embeddings
  • Memory issues with LLaMA3
  • EfficientPartialEmbedding implementation
  • Modular Model Spec
  • SiteForge web page design generation

Links mentioned:


Unsloth AI (Daniel Han) ▷ #research (9 messages🔥):

  • MatMul-free Models in LLMs
  • Test-Time-Training layers
  • Synthetic Dataset for Chatbot
  • Enhanced Imitation Learning with Orca
  • Soft Capping in Flash Attention

Links mentioned:


CUDA MODE ▷ #triton (2 messages):

  • Integrating Triton Kernel with PyTorch
  • Registering Custom Functions in PyTorch
  • torch.compile and Custom Functions
  • CUDA Kernel Integrations

CUDA MODE ▷ #torch (1 messages):

  • executorch
  • vulkan backend

CUDA MODE ▷ #cool-links (1 messages):

  • FlashInfer
  • Kernel Library for LLM Serving
  • INT8 and FP8 flash attention kernels

Link mentioned: GitHub - flashinfer-ai/flashinfer: FlashInfer: Kernel Library for LLM Serving: FlashInfer: Kernel Library for LLM Serving. Contribute to flashinfer-ai/flashinfer development by creating an account on GitHub.


CUDA MODE ▷ #jobs (3 messages):

  • Job application enthusiasm
  • Team commendation
  • Positive reactions

CUDA MODE ▷ #beginner (24 messages🔥):

  • Beginner CUDA Projects
  • Flash Attention
  • Benchmarking Techniques
  • Triton for Softmax
  • Tensor Offloading

CUDA MODE ▷ #torchao (1 messages):

  • Quantization Flow Example Using Static Quantization
  • Importance of Calibration with Data

Link mentioned: Add static quantization as an example for calibration flow by jerryzh168 · Pull Request #487 · pytorch/ao: Summary: So far quantization flow API that we provided (quantize_) does not require calibration (calibrate a model with sample data), this PR added a static quantization example that serves as an e...


CUDA MODE ▷ #ring-attention (8 messages🔥):

  • Ring Attention
  • Splitting KV cache across GPUs
  • AWS g5.12xlarge instance

Link mentioned: kv-calc.py: GitHub Gist: instantly share code, notes, and snippets.


CUDA MODE ▷ #triton-puzzles (1 messages):

  • Puzzle 9 explanation
  • Problem statement confusion

CUDA MODE ▷ #llmdotc (176 messages🔥🔥):

  • Llama model framework improvements
  • llm.c updates
  • Zero initialization impact on NLP models
  • MuP library integration
  • Fast inference strategy

Links mentioned:


Nous Research AI ▷ #off-topic (1 messages):

  • Error PDF Discussion
  • Gary Marcus and Yann LeCun GIF

Link mentioned: Gary Marcus Yann Lecun GIF - Gary Marcus Yann LeCun Lecun - Discover & Share GIFs: Click to view the GIF


Nous Research AI ▷ #interesting-links (1 messages):

metaldragon01: https://x.com/stefan_fee/status/1810695036432232576


Nous Research AI ▷ #general (117 messages🔥🔥):

  • Impact of AI on Jobs
  • Hermes 2 Pro
  • Jailbreaking LLMs
  • Worldsim Console
  • Sonnet Model Capabilities

Links mentioned:


Nous Research AI ▷ #ask-about-llms (8 messages🔥):

  • LLMs and classification
  • BAML usage
  • Synthetic data generation tools
  • Processing PDFs using Sonnet 3.5 API
  • Fine-tuning with weight keys

Links mentioned:


Nous Research AI ▷ #rag-dataset (88 messages🔥🔥):

  • RankRAG
  • Zero-Shot Prompting
  • Function Calling in RAG
  • Structured Scratch Pad
  • Llama3-RankRAG

Link mentioned: Tweet from Rohan Paul (@rohanpaul_ai): Incredible results for the RAG world from @nvidia model 👏. Llama3-RankRAG from @nvidia significantly outperforms GPT-4 models on 9 knowledge-intensive benchmarks. 🤯 📌 Performs comparably to GPT-4...


Modular (Mojo 🔥) ▷ #general (5 messages):

  • Primeagen interviews Chris Lattner
  • Mojo book
  • Community resources for AI with Mojo
  • Qualcomm SNPE with Mojo

Links mentioned:


Modular (Mojo 🔥) ▷ #💬︱twitter (1 messages):

ModularBot: From Modular: https://twitter.com/Modular/status/1810782477079957831


Modular (Mojo 🔥) ▷ #✍︱blog (3 messages):

  • Bringing your own PyTorch model with Modular
  • Develop Locally, Deploy Globally
  • Taking Control of AI

Links mentioned:


Modular (Mojo 🔥) ▷ #tech-news (1 messages):

helehex: i hear that mr. lattner has something special going on tomorrow with the primeagen


Modular (Mojo 🔥) ▷ #🔥mojo (129 messages🔥🔥):

  • Code optimization discussions
  • Mojo and Python integration
  • Mojo language features
  • Mojo documentation and resources
  • Reference and value semantics debate

Links mentioned:


Modular (Mojo 🔥) ▷ #performance-and-benchmarks (3 messages):

  • Clock Calibration Issue
  • Timer Cycle Functions

Modular (Mojo 🔥) ▷ #📰︱newsletter (1 messages):

Zapier: Modverse Weekly - Issue 39 https://www.modular.com/modverse/modverse-weekly-issue-39


Modular (Mojo 🔥) ▷ #nightly (19 messages🔥):

  • Mojo Compiler Nightly Update
  • Conditional Conformance in Mojo
  • Handling Unix FIFO in Mojo
  • Load Iris Dataset in Mojo
  • Mojo Language Improvements

Links mentioned:


Modular (Mojo 🔥) ▷ #mojo-marathons (17 messages🔥):

  • Vectorization Performance
  • Algorithm Benchmarking
  • Load/Store Issues
  • Benchmark Stabilization Tips

Link mentioned: How to get consistent results when benchmarking on Linux? | Easyperf : no description found


LM Studio ▷ #💬-general (54 messages🔥):

  • Custom Voices with LLM
  • Image Generation and Tools
  • Local Perplexica with LM Studio
  • Running LLMs on Android
  • Text-to-Speech Front Ends

Links mentioned:


LM Studio ▷ #🤖-models-discussion-chat (75 messages🔥🔥):

  • InternLM context handling
  • Web scraping with LLMs
  • AI coding limitations
  • Gemma 2
  • QLo performance

Links mentioned:


LM Studio ▷ #🧠-feedback (8 messages🔥):

  • Excitement Over Upcoming Features
  • Feature Requests for Organizing Chats
  • Download Speed Issues
  • Feature Requests for Context Window Indicators

LM Studio ▷ #🎛-hardware-discussion (18 messages🔥):

  • RX 6800XT Multi-GPU Setup
  • Performance of Llama 3 70B Q7 Instruct
  • RX 6800XT vs 7900XTX
  • Building a Desktop/Server for RTX 3090
  • Concerns with AMD ROCm Support

LM Studio ▷ #🧪-beta-releases-chat (2 messages):

  • Trustworthiness of Yorkie
  • Suspicious behavior

LM Studio ▷ #amd-rocm-tech-preview (1 messages):

  • AMD graphics card 7700XT
  • LM Studio update issue
  • Fimbulvetr Q4_K_M model performance

LM Studio ▷ #🛠-dev-chat (7 messages):

  • LM Studio GPU offload
  • Long context issues on Linux
  • Bug report process
  • Context requires RAM advice

Eleuther ▷ #general (32 messages🔥):

  • Desired output distributions in models
  • Chinchilla vs Gopher training computation
  • Test time training
  • Synthetic data generation tools

Links mentioned:


Eleuther ▷ #research (77 messages🔥🔥):

  • TTT-Linear and Delta Rule
  • TTT-MLP Optimization
  • Data Attribution with In-Run Data Shapley
  • Gradient Normalization Techniques
  • Emerging RNN Architectures vs. Transformers

Links mentioned:


Eleuther ▷ #scaling-laws (9 messages🔥):

  • Brain size evolution
  • Intelligence and evolutionary benefits
  • Linearity of brain size
  • Neuronal density and intelligence

Link mentioned: Brain size riddle solved as humans exceed evolutionary trend: The largest animals do not have proportionally bigger brains—with humans bucking this trend—a study published in Nature Ecology & Evolution has revealed.


Eleuther ▷ #interpretability-general (2 messages):

  • EleutherAI at ICML
  • ICML papers announcement

Eleuther ▷ #lm-thunderdome (6 messages):

  • Chain-of-Thought reasoning in models
  • Model's access to answer choices
  • RegexFilter for MedQA
  • Sampler initialization error
  • Error troubleshooting

Eleuther ▷ #gpt-neox-dev (2 messages):

  • Containers on Kubernetes
  • Pods with Neox Image

Stability.ai (Stable Diffusion) ▷ #general-chat (119 messages🔥🔥):

  • Model Training Techniques
  • Booru Tags in AI
  • Role of AI in Society
  • SD Extensions and Tools

Links mentioned:


OpenAI ▷ #ai-discussions (67 messages🔥🔥):

  • DALL-E alternatives
  • AI text detectors
  • StableDiffusion
  • Diffusion tools
  • AI model recommendations

Links mentioned:


OpenAI ▷ #gpt-4-discussions (9 messages🔥):

  • Monetization for GPTs
  • VPN causing issues with GPTs
  • Server problems resolved
  • Consistency in GPT responses
  • User dissatisfaction

OpenAI ▷ #prompt-engineering (2 messages):

  • Content creation strategies
  • Audience engagement
  • Platform optimization
  • Content calendar structure
  • Key metrics for content success

OpenAI ▷ #api-discussions (2 messages):

  • Content Creation
  • Audience Engagement
  • Social Media Strategy
  • Content Calendar
  • Metrics Tracking

LlamaIndex ▷ #announcements (1 messages):

  • LlamaCloud Beta Release
  • Data Quality
  • Scalability Hurdles
  • LlamaParse Integration

Links mentioned:


LlamaIndex ▷ #blog (3 messages):

  • Property Graphs in LlamaIndex
  • LlamaCloud beta release
  • AGI House hackathon

Link mentioned: AGI House: no description found


LlamaIndex ▷ #general (65 messages🔥🔥):

  • E-commerce RAG chatbot enhancements
  • FlagEmbeddingReranker import error
  • Rate limit issues with Groq API
  • Handling large datasets for chatbots
  • astream_chat implementation issue

Links mentioned:


Perplexity AI ▷ #general (43 messages🔥):

  • Arc Search Recommendation
  • Context Issues in Perplexity
  • Notion Integration with Perplexity
  • Claude 3.5 and Gemini 1.5 Comparison
  • API Credit Clarification

Links mentioned:


Perplexity AI ▷ #sharing (10 messages🔥):

  • Antikythera Mechanism
  • Nothing's New Phone
  • Hydrogen Cars
  • Boeing Guilty Plea
  • Digital Advertising in South Korea

Links mentioned:


Perplexity AI ▷ #pplx-api (10 messages🔥):

  • API vs UI results
  • Nodemon setup issues with PPLX library
  • Rate limits and citation feature increases

LAION ▷ #general (37 messages🔥):

  • Deepspeed Efficiency
  • Open Source Video Upscalers
  • PaintsUndo Project
  • AI System Copyright Lawsuit
  • Copyright Term Opinions

Links mentioned:


LAION ▷ #research (13 messages🔥):

  • Generative Chameleon
  • Complex-Valued Architectures
  • Vision Architecture with 2D DFT
  • Training Challenges
  • Model Scaling Issues

LAION ▷ #resources (1 messages):

  • Image Diffusion Models Repository
  • GitHub Repo for Image Diffusion
  • Educational Codes for Image Diffusion

Link mentioned: GitHub - swookey-thinky/mindiffusion: Repository of lessons exploring image diffusion models, focused on understanding and education.: Repository of lessons exploring image diffusion models, focused on understanding and education. - swookey-thinky/mindiffusion


OpenRouter (Alex Atallah) ▷ #general (47 messages🔥):

  • Quota Exceeded Issue
  • Image Viewing Issues
  • Dolphin 2.9 Mixtral on OpenRouter in LangChain
  • Mistralai Mixtral v0.1 Error
  • LLM Applications for Language Translation

Links mentioned:


Latent Space ▷ #ai-general-chat (27 messages🔥):

  • Claude Contest Reminder
  • Nuance in Speech Models
  • AI Math Competition Success
  • Supermaven's Babble Upgrade
  • Lilian Weng's Blog on Hallucinations

Links mentioned:


LangChain AI ▷ #general (16 messages🔥):

  • LLMWhisperer PDF Extraction
  • Multi-agent chatbot issues in LangChain
  • Crawlee for Python launch
  • Question answering over PDF docs using RAG
  • ConversationSummaryMemory in LangChain

Links mentioned:


LangChain AI ▷ #share-your-work (5 messages):

  • Llamapp
  • Slack Bot Agent Guide
  • Rubik's AI Pro Beta Testing
  • RAG Article
  • Web Data Extraction LLMs

Links mentioned:


LangChain AI ▷ #tutorials (1 messages):

  • Slack Bot Agent
  • Composio and LangChain
  • PR review automation with OpenAI and ChatGPT

Link mentioned: Slack Bot Agent to review PRs: This guide provides detailed steps to create a Slack Bot Agent that leverages agentic frameworks, OpenAI and ChatGPT to review PRs every time they're created.


OpenInterpreter ▷ #general (20 messages🔥):

  • OI executes with code examples
  • Misplaced self-advertising
  • Using '--model i' with local vision mode
  • 'i model' functionality
  • Qwen 2 7b issues

Link mentioned: open-interpreter/interpreter/terminal_interface/profiles/defaults/os.py at main · OpenInterpreter/open-interpreter: A natural language interface for computers. Contribute to OpenInterpreter/open-interpreter development by creating an account on GitHub.


tinygrad (George Hotz) ▷ #general (9 messages🔥):

  • NV=1 support
  • Compatibility with architectures older than Ampere
  • George Hotz's comments on compatibility
  • Potential community contributions for older architectures
  • GSP firmware-based generations

tinygrad (George Hotz) ▷ #learn-tinygrad (9 messages🔥):

  • Recommended video courses for learning tinygrad
  • Issue with NV=1 on WSL2
  • CUDA compatibility on WSL2
  • NVIDIA open GPU kernel module on WSL2

Link mentioned: GitHub - NVIDIA/open-gpu-kernel-modules: NVIDIA Linux open GPU kernel module source: NVIDIA Linux open GPU kernel module source. Contribute to NVIDIA/open-gpu-kernel-modules development by creating an account on GitHub.


Interconnects (Nathan Lambert) ▷ #news (7 messages):

  • GitHub Copilot lawsuit
  • Developer concerns on Copilot
  • Legal implications for Microsoft and OpenAI

Link mentioned: Judge dismisses DMCA copyright claim in GitHub Copilot suit: A few devs versus the powerful forces of Redmond – who did you think was going to win?


Interconnects (Nathan Lambert) ▷ #ml-questions (4 messages):

  • Control Vector
  • Steering Vector
  • Concept Vectors
  • Feature Clamping
  • Feature Steering

Interconnects (Nathan Lambert) ▷ #ml-drama (5 messages):

  • Google Flame paper issue
  • AI bill controversy
  • Training on test data

Link mentioned: Tweet from Helen Toner (@hlntnr): Shots fired by @Scott_Wiener 👀 Image is a letter from last week where Wiener (the state senator behind SB 1047, an AI bill in CA) directly calls out a16z and Y Combinator for "inaccurate, inflam...


LLM Finetuning (Hamel + Dan) ▷ #hugging-face (2 messages):

  • Credit Issues
  • Member Response Time

LLM Finetuning (Hamel + Dan) ▷ #zach-accelerate (7 messages):

  • Multi GPU Training Issues
  • Accelerate Configuration
  • Batch Size Impact
  • Performance Expectations
  • Debugging Techniques

Link mentioned: Troubleshoot: no description found


Cohere ▷ #general (7 messages):

  • Teaching and Learning Platform
  • CommandR RAG-Optimized Features
  • Dark Mode Release
  • Enterprise Features Adaptation

Cohere ▷ #project-sharing (1 messages):

competent: Agreed 👍


Mozilla AI ▷ #llamafile (4 messages):

  • Performance penalty in llama.cpp
  • Benchmark suite upgrade issues

AI Stack Devs (Yoko Li) ▷ #app-showcase (1 messages):

__n2k: ^ I made those watermelons 🍉 😄


AI Stack Devs (Yoko Li) ▷ #events (1 messages):

  • Book to Game Jam
  • Rosebud AI
  • Puzzle games
  • Rhythm games
  • Text-based adventures

Link mentioned: Tweet from Rosie @ Rosebud AI 🌹 (@Rosebud_AI): Books turned into games with AI 🌹 Our recent jam had devs use Rosebud AI to create games from literary works, and these are the results! Winners will be revealed this Wednesday July 10th at 11:30 A...


MLOps @Chipro ▷ #events (1 messages):

  • KAN authors
  • alphaXiv forum
  • arXiv paper discussion

Link mentioned: alphaXiv: no description found


MLOps @Chipro ▷ #general-ml (1 messages):

  • Information Retrieval
  • Recommendations
  • Podcast Guests
  • Outreach

LLM Perf Enthusiasts AI ▷ #general (1 messages):

frandecam: Does anyone know if there is a OpenAI 10K credits or similar program for Anthropic?






{% else %}

The full channel by channel breakdowns have been truncated for email.

If you want the full breakdown, please visit the web version of this email: [{{ email.subject }}]({{ email_url }})!

If you enjoyed AInews, please share with a friend! Thanks in advance!

{% endif %}