> AI News for 3/19/2024-3/20/2024. We checked [**358** Twitters](https://twitter.com/i/lists/1585430245762441216) and **21** Discords (**337** channels, and **9841** messages) for you. Estimated reading time saved (at 200wpm): **1033 minutes**.

It’s often said that a key sign a startup has gone bad is ā€œshipping and dippingā€ - the tendency of top performers who have already decided to leave to ship one last big thing and go out on a high note. This has just happened to both:

Senior departures are a fact of life in chaotic startups, but these do feel more major than most. It could be the start of a consolidation/cooling wave in the ā€œhotā€/ā€œGPU-richā€ startup area, but we’re not quiiiite ready to call it yet. Consider us on alert though.


Table of Contents

[TOC]


PART X: AI Twitter Recap

all recaps done by Claude 3 Opus, best of 4 runs

Microsoft AI

  • @mustafasuleyman: Mustafa Suleyman is joining Microsoft as CEO of Microsoft AI, leading consumer AI products and research like Copilot, Bing and Edge (3.5M views)
  • @yusuf_i_mehdi: Microsoft Azure and NVIDIA’s Grace Blackwell 200 Superchip are coming together (8k views)
  • @saranormous: Satya Nadella got a card from Sara Norman about Mustafa Suleyman joining Microsoft (21k views)
  • @kylebrussell: Mustafa Suleyman joining Microsoft seems like a weird acquisition, perhaps for regulatory or cost reasons (13k views)

Inflection AI

  • @inflectionAI: Inflection AI is announcing important updates as a new phase for the company begins (505k views)
  • @vikhyatk: Inflection AI raised $1.3B at a $4B valuation less than a year ago (78k views)
  • @kylebrussell: Inflection AI investors and early employees got rug pulled (38k views)
  • @inflectionAI: Inflection AI 2.5 will soon be available by API to accelerate their mission of creating a personal AI for everyone (36k views)

Google DeepMind

  • @GoogleDeepMind: Google DeepMind is announcing TacticAI, an AI assistant capable of offering insights to football experts on corner kicks, developed with Liverpool FC (373k views)
  • @GoogleDeepMind: Eight years ago, AlphaGo made history by becoming the first AI to defeat world champion Lee Sedol in the game of Go (109k views)
  • @GoogleDeepMind: TacticAI uses a geometric deep learning approach to tackle the problem of limited corner kick data in Premier League matches (31k views)
  • @GoogleDeepMind: In a blind test, football experts found TacticAI’s suggestions indistinguishable from real corners and favored its tactical advice 90% of the time (21k views)

Anthropic

  • @AnthropicAI: Anthropic’s Claude 3 Haiku and Claude 3 Sonnet are now generally available on Google Cloud’s Vertex AI platform, with Claude 3 Opus coming soon (102k views)
  • @abacaj: Anthropic cooked with the Claude 3 model (31k views)
  • @abacaj: Claude gives working code based on new documentation not in training sets, unlike GPT-4 (8k views)
  • @abacaj: Anthropic redeemed themselves with the Claude 3 model after their model sucked last year (4k views)

AI Safety and Risks

  • @AISafetyMemes: NVIDIA announces AI nurses that are 90% cheaper than humans and outperform them at bedside manner, raising concerns about job displacement and AI progress (77k views)
  • @ClementDelangue: Concentration of power is the biggest risk in AI (62k views)
  • @AISafetyMemes: Zvi Mowshowitz warns about the risks of AI systems like Devin that can make plans, handle obstacles, and potentially lead to recursive self-improvement (25k views)
  • @abacaj: AGI in the hands of a single organization is increasingly likely and keeps Aidan Gomez up at night (25k views)

AI Benchmarks and Evaluations

  • @yanndubs: AlpacaEval is now length-controlled, with 0.98 correlation to Chat Arena, simple interpretation, and robustness to length gamification (30k views)
  • @deliprao: Grok’s performance doesn’t seem better than Mixtral, which is an order of magnitude smaller (9k views)
  • @yanndubs: AlpacaEval uses length control by default but still shows non-LC metrics (547 views)
  • @deliprao: LLM development today is not as efficient as it could be due to underinvestment in novel benchmarking methods (894 views)

AI Assistants and Agents

  • @aryxnsharma: Induced AI is opening early access to the first public and free autonomous web agent API (15k views)
  • @aryxnsharma: A browser agent is a gateway to building a full digital remote worker (2k views)
  • @yoheinakajima: Yohei Nakajima’s agent logs internal function calls with time, type, name, arguments and results (1.4k views)
  • @aryxnsharma: An early Induced AI user built a macOS menu bar using their agent API (497 views)

AI Coding Assistants

  • @svpino: In the future, coding may become a hobby as building software will be more about problem-solving and thinking than writing code (93k views)
  • @LangChainAI: LangChain and Modal are hosting a webinar on building code agents in production without pain (8.5k views)
  • @corbtt: Next-gen coding assistants will operate at the level of abstraction of Amazon’s ā€œsix-pagerā€ detailed design docs (537 views)
  • @rameerez: Ramiro Berrelleza made a GPT for Hotwire docs by downloading them in Markdown and feeding into GPT (234 views)

AI Avatars and Video

  • @suno_ai_: Suno AI is releasing V3 of their audio generation model (197k views)
  • @AravSrinivas: Suno AI is the next AI unicorn, with amazing iteration speed, sound quality, and increasingly being used over Spotify (327k views)
  • @synthesiaIO: Synthesia was named one of Fast Company’s Most Innovative Companies for their AI video and avatar technology (1k views)
  • @AssemblyAI: New AI-powered content creation tools are coming out frequently, including for video, podcasting, avatars and more (812 views)

Memes and Humor

  • @nearcyan: Underused strategy in life meme (417k views)
  • @AISafetyMemes: ā€œIt’s magicā€ meme about AI (19k views)
  • @DeepLearningAI: Meme about AI-generated videos not looking cool, originally from r/ProgrammerHumor on Reddit (7k views)
  • @AISafetyMemes: Meme about an AI-generated death metal song based on Claude’s messages (7k views)

PART 0: Summary of Summaries of Summaries

We’re concluding that Claude Opus is simply the best model for top-level summaries, so we’re discontinuing the A/B/C tests (see the archives for our struggles/record). We’ll be exposing parallel runs for all 3 models + more (incl Gemini 1.5!!), as this problem is topologically similar to the personalization app we’ll be launching.

  • Grok-1 Release Sparks Debate: The open-source release of Grok-1, a 314B parameter Mixture-of-Experts model from xAI under the Apache 2.0 license, has generated significant discussion. While some are excited about its potential, others question its practicality given the immense GPU requirements for inference. Comparisons are drawn to other large models like GPT-4 and Claude, with speculation about Grok-1’s training data and architecture choices.

  • Nvidia’s Next-Gen GPUs Hint at AI Advances: Rumors about Nvidia’s upcoming GeForce RTX 50-series ā€œBlackwellā€ graphics cards implementing GDDR7 memory at 28 Gbps have piqued interest, as detailed in a TechPowerUp article. Discussions also touched on CEO Jensen Huang’s GTC keynote potentially confirming GPT-4’s 1.8 trillion parameter Mixture-of-Experts architecture.

  • Attention Mechanisms Under the Microscope: Deep dives into attention mechanisms occurred, particularly around memory scaling properties in FlashAttention and RingAttention. A paper on Striped Attention was shared, proposing a method to improve workload balance in causal transformer models. Fundamental motivations behind attention, like overcoming fixed-length encoding limitations, were also clarified.

  • Prompt Engineering Advancements: New tools for prompt engineering were introduced, such as Prodigy by Explosion and the open-source PromptTools. Discussions emphasized treating prompt engineering as a data annotation problem and the importance of version management and model response comparison features. Innovative experiments in content translation using different personas with GPT-3.5-turbo were also shared.

  • Photonics Chips Promise Faster AI: A YouTube video highlighted breakthroughs in photonics chips claimed to be 1000 times faster, with insights from Lightmatter, a photonic supercomputer company. The potential impact on AI acceleration and high-performance computing was discussed.


PART 1: High level Discord summaries

Stability.ai (Stable Diffusion) Discord

  • Rolling Out the 3D Future with Stability AI: Stability.ai has announced Stable Video 3D (SV3D), elevating the generative model landscape with abilities to produce high-quality multi-view images and 3D meshes from single object images. Two variants, SV3D_u and SV3D_p, are released, with SV3D_u specialized in generating orbital videos and the latter extending these capabilities, as detailed in the official announcement.

  • Anticipation Builds for SD3: The community awaits the distribution of SD3 invites, amid speculations of an April release; discussions involve updates and partnerships with blockchain companies.

  • Techies Tackle AI Model Quirks: Participants shared experiences and technical challenges of running different Stable Diffusion models, citing inefficiencies and issues requiring AVX instruction sets, and engaged in troubleshooting and optimizations for various hardware setups.

  • Serialization Safety and Security: Engineers sought advice on converting .pt files to SAFETENSORS format, referencing official Python documentation about the security concerns associated with object serialization using Pickle, as seen here.
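The security concern behind that conversion can be shown with a stdlib-only sketch: unpickling runs arbitrary code via `__reduce__`, which is exactly what the SAFETENSORS format avoids by storing raw tensor bytes plus a JSON header. The `Malicious` class below is a hypothetical illustration, not code from the discussion.

```python
import pickle

# Unpickling calls __reduce__, so a crafted .pt/.ckpt file can execute
# arbitrary code the moment it is loaded -- no tensor data required.
class Malicious:
    def __reduce__(self):
        # Benign stand-in: returns a callable + args that pickle will invoke.
        # A real payload could just as easily call os.system.
        return (str.upper, ("arbitrary code ran on load",))

payload = pickle.dumps(Malicious())
result = pickle.loads(payload)  # no Malicious instance comes back
print(result)  # -> ARBITRARY CODE RAN ON LOAD
```

For the conversion itself, the usual pattern is to load the checkpoint once in a trusted environment and re-save the state dict with `safetensors.torch.save_file`, which by design cannot embed executable code.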

  • Debating Openness in AI Licensing: A discussion unfolded around the commercial use and open-source licensing of AI models, with some community members expressing a laissez-faire attitude towards ownership once a project is open-sourced.


Perplexity AI Discord

  • Perplexity Unchains Pro Users: Perplexity Pro users can now enjoy unlimited daily queries with Claude 3 Opus, raising debates about the marketing terms like ā€œunlimitedā€ and whether they may be misleading.
  • Is ā€œUnlimitedā€ a Myth?: Engineers scrutinize the ā€œunlimitedā€ claims by Perplexity for potentially misleading customers. There’s a split opinion on the actual usage limits, and discussions touch upon the legal ramifications and company growth comparisons with Gemini Pro 1.5.
  • AI as a Household Teacher: Some are exploring using AI models, such as Claude 3 Opus, in simple explanations for educational purposes with children, while others shared their experiences with AI tools like ChatGPT Pro for promoting learning.
  • Technical Inconsistencies and Future Tech Queries: Inconsistencies in the Perplexity API behavior, such as varying responses and expectations around model deprecation, sparked discussion. Queries were also made about integrating open-source models like Grok and image vision support in the API.
  • Perplexity’s Varied Use Cases and Visibility: The Perplexity AI is being used for diverse purposes, including creative writing, vulnerability management, and technology investigations. A gentle nudge was given about sharing content correctly in the Discord guide to maintain community visibility.

Unsloth AI (Daniel Han) Discord

  • GitHub Stardom for Unsloth AI: Unsloth AI revealed it’s trending on GitHub this week, boasting notable performance features with promises of 2-5X faster speeds and 70% less memory usage for QLoRA & LoRA finetuning. The community is nudged to star the project at Unsloth GitHub.

  • Grok-1 Critique by the Community: The release of Grok-1 by xAI with 314 billion parameters raised eyebrows over its practicality, given the steep GPU requirements for inference. Curiosities flare up on whether the motivation behind such a hefty model is technical advancement or a marketing strategy influenced by Elon Musk’s visionary approach.

  • Fine-Tuning Epochs and Model Size Discussions: Discord discussions turned to fine-tuning epochs, especially in the context of voice neural networks vs. LLMs, with a consensus hinting that 3 epochs could be the standard to prevent style overfitting. When matching trainable parameters to the data, a rank of 32 or 64 was recommended for datasets with about 800,000 lines.

  • Mistral Versus Gemma for Specific AI Tasks: A detailed comparison was drawn between Mistral-7b and Gemma 7b, pointing out their proficiency in domain-specific classification tasks. Technical issues related to using Huggingface Inference API with 4-bit quantization were also addressed, suggesting the use of Unsloth’s Colab notebook for successful executions.

  • Exploration of Unsloth AI Capabilities and Support: The community was active in discussing the ins and outs of model fine-tuning, conversion, and deployment using Unsloth, with a focus on open-source models like Mistral. It affirmed that while full fine-tuning isn’t currently Unsloth’s emphasis, it remains a possibility in future development. Moreover, it was made clear that the eval_steps parameter impacts the frequency of evaluations during fine-tuning, but not necessarily the outcome.

  • Poetic and Philosophical Musings in Community: In a lighter mood, the Unsloth AI guild shared a poem titled ā€œAn Appeal to A Monkeyā€, exploring themes of simplicity in our complex world and maintaining a conversational dynamic inclusive of intuitive learning examples from youngsters and serendipitous username encounters.


LM Studio Discord

  • New Kid on the Block - Command-R 35B: Support for the Command-R 35B model in LM Studio is expected soon, with a pending merge of llama.cpp pull request #6033. However, there’s an unresolved compatibility issue with llama.cpp not supporting c4ai yet, detailed here.

  • Grok-1’s Open-Source Debate: The release of Grok-1, a 314B parameter model, sparked discussion on its practicality for local use, with skepticism over the functionality without further tuning. More information can be found on the xAI blog.

  • GPU Tug-of-War - Budget vs. Performance: Conversations on hardware for running Language Models ranged from considering second-hand GPUs like the 3090 to using MacBooks. Users interested in purchasing Epyc CPUs and older Teslas like the K80 discussed the technicalities and challenges involved.

  • AMD’s ROCm Hurdles: A link to prebuilt Windows ROCm libraries for gfx1031 and gfx1032 was shared (GitHub - brknsoul/ROCmLibs). However, the AMD 6700 xt lacks official support for ROCm, presenting compatibility issues with LM Studio.

  • LM Studio Configurations at Your Fingertips: Users seeking model configuration presets for LM Studio can access a collection of JSON files at GitHub - lmstudio-ai/configs, while discussions continued about the functionality across various LM Studio versions and the call for a JSON function-calling implementation on the Local Inference Server was left unaddressed.


Nous Research AI Discord

  • RTX 50 ā€œBlackwellā€ Opts for Slower GDDR7: NVIDIA’s anticipated GeForce RTX 50-series ā€œBlackwellā€ graphics cards will use GDDR7 memory at 28 Gbps, not maxing out the 32 Gbps capability. Details reflect a conservative approach akin to the RTX 20-series’ memory speeds and can be found in a TechPowerUp article.

  • Artificial Charm to Boost AI Performance: Discussions included unconventional methods like using a network of ā€˜horny claudes’ to improve mermaid diagrams, and the concept of ā€˜rizzing up’ models for enhanced responses. The ethics of replicating features from other AI models also surfaced as a hot topic.

  • Elephant in the Room – Model Scaling Costs: Conversations highlight the Herculean costs ($300,000) and practicality concerns of scaling AI models like llama-2 13b to 20b, with members showing skepticism about the incremental benefits of upscaling, despite a wishful intrigue for a 20-billion-parameter model outperforming Mistral.

  • Grok-1’s Position Debated Among AI Titans: Grok-1, with its 314B parameters, stirred discussions as being barely more potent than GPT-3.5 and less competitive compared to smaller 7B parameter models. NVIDIA CEO Jensen Huang may have inadvertently confirmed GPT-4’s 1.8 trillion parameters. Concurrently, licensing changes allowing commercial use of Yi-9B and the merits of continual pretraining of models like Mixtral fostered a debate on practical strategies for AI progression.

  • Open Source vs. Commercial AI Models: Reflections on the potential stifling of open-source AI by commercially restrictive giants, suggesting a future where entities may customize large language models to embody their core values. This discourse envisages a robust base furnished by open-source models for ongoing development and expansion in AI systems.

  • Modeling Perplexity and Pruned Prospects: Technical exchanges included the perplexities of calculating perplexity for Llama2 and creating pruned versions of large models, such as Smallstral, a downscaled take on Mistral-7B-v0.1. Partnership efforts like Bittensor faced hurdles with chain issues and the pursuit of transactional tokens (tao) for network activities.

  • RAG’s Desirable Futures and Modes: A vision for RAG (Retrieval-Augmented Generation) models suggests markdown-like outputs and advanced reasoning capabilities. Members discussed balance in model responses, toggling between relying on internal knowledge and utilizing an exclusive external context or ā€œRAG mode.ā€ Incorporating external context, perhaps through Cohere’s span highlighting, was mused as a means to enhance RAG utility.


Eleuther Discord

  • Ivy League Courses at Your Fingertips: Ivy League courses are now openly accessible online, with a standout mention of Stanford’s CS videos on NLP. The technical community praised this democratization of high-tier educational resources.

  • Gizmos and Gadgets Galore: Discussions have highlighted intriguing tech such as the Devin AI software engineer and Figure 01, an OpenAI-assisted robot. Additionally, the AI ThoughtStream concept was mentioned alongside Self-Taught Reasoner (STaR), with further insights found in the related paper.

  • Grok This: The release of Grok-1 on GitHub has spurred debates comparing it to the likes of Claude and GPT-4, underscoring a need for independent benchmarks and finetuning insights. Concerns over AI model evaluation metrics were raised, emphasizing the use of EleutherAI’s cookbook for calculating TFLOPs.

  • Economy of Scale: Discussions within the guild touched on affordability, such as AI paper submissions to TMLR, a free option for conference submissions, and economical training times for models like Pythia-70M. General consensus indicated NVIDIA’s performance figures and research, like Optimizing Distributed Training on Frontier for Large Language Models, serve as valuable resources for hypothetical time estimates.

  • Dataset Complexity’s Impact on Language Models: In the #scaling-laws channel, members discussed language model scaling laws, dataset complexity, and the importance of selection criteria such as intrinsic entropy and lexical density for efficient pretraining practice.

  • Sampling Strings, Served Up: Inquiry about sampling from n-gram distributions led to explanations of autoregressive and hierarchical sampling methods, with tools and code examples found at the GitHub link and on the Wikipedia page for n-gram language models.
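The autoregressive sampling method mentioned above can be sketched in a few lines: count bigrams from a corpus, then repeatedly draw the next token from the empirical distribution conditioned on the previous one. The toy corpus is hypothetical; real n-gram models add smoothing and higher-order contexts.

```python
import random
from collections import Counter, defaultdict

def train_bigrams(tokens):
    """Count next-token frequencies for each context token."""
    counts = defaultdict(Counter)
    for prev, nxt in zip(tokens, tokens[1:]):
        counts[prev][nxt] += 1
    return counts

def sample_text(counts, start, length, rng):
    """Autoregressive sampling: each next token is drawn conditioned
    on the previous token; stops early if the context has no successors."""
    out = [start]
    while len(out) < length and counts[out[-1]]:
        choices, weights = zip(*counts[out[-1]].items())
        out.append(rng.choices(choices, weights=weights)[0])
    return out

corpus = "the cat sat on the mat and the cat ran".split()  # toy corpus
counts = train_bigrams(corpus)
print(" ".join(sample_text(counts, "the", 6, random.Random(0))))
```

Hierarchical sampling differs only in backing off to shorter contexts when the full n-gram context is unseen.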

  • Harnessing the Chaos of LLM Evaluation: Numerous issues around the usage and integration of lm-eval-harness were addressed in the #lm-thunderdome channel, including concerns of default model selection behavior, evaluation benchmark consistency, and deadlocks. Users were informed that lm-eval version 0.4.2 is available on PyPI and were encouraged to assist with PR reviews.

  • Shuffle or Not to Shuffle The Pile: In #gpt-neox-dev, clarity was sought on The Pile dataset’s shuffling. While original files were not shuffled, it was clarified that the preprocessed data on HF, used by Pythia, is indeed shuffled. However, individual components might remain chronologically organized.


OpenAI Discord

  • AI Ponders its Own Understanding: Engineers debate on whether AIs possess a form of understanding or solely predict sequences of words. They draw parallels between emergent AI behaviors and human cognitive processes, questioning if AI can ever parallel human sensory experiences.

  • Debating AI Consciousness: In another philosophical turn, consciousness is under the AI lens, teasing apart the idea that AI may require less effort to interpret because it lacks human social drives for status or wealth, with implications for emotive communication.

  • GPT-4 vs. Claude-3 Throwdown: Within discussions regarding performance, Claude-3 is praised for conversational prowess whereas GPT-4 is commended for programming capabilities. Engineers are also awaiting new model releases or rollback options to earlier GPT versions, while some are troubleshooting technical issues and seeking assistance at help.openai.com.

  • GPT-5 Release Rumblings and API Advances: The community is curious about the release of GPT-5 and the methods to enable web search functionalities within GPT APIs. They’re also exploring creating AI chatbots on mobile platforms and addressing the overzealous nature of GPT filter sensitivities.

  • Prompting a Better Future in AI: Engineers are sharing strategies to combat AI’s task refusals and optimize prompt structures, particularly for classification tasks with GPT models. The dialogue includes how to handle major shifts in response patterns from platforms like ChatGPT and encourages methodical testing of prompts to balance recall with false positives.


HuggingFace Discord

  • Multi-GPU Tuning Torment: An engineer sought advice on fine-tuning cross-encoder models with multiple GPUs, pondering the necessary parameter adjustments for scaled hardware.

  • Aya Aces Up Its Sleeve: The Aya demo has been enhanced with a higher repetition penalty, and there’s a call-to-action for developers to add a slider feature to the Gradio interface.

  • H100 GPU Commands Attention: NVIDIA’s H100 GPU, with its hefty power consumption of up to 850W and integration of server CPU and GPU on a single board, captivated discussions, despite concerns about its long lead times and the challenge of accommodating its power needs in modern supercomputer racks.

  • Grok-1 Stuns with Size and Speed: The newly released Grok-1, a 314 billion parameter MoE model under Apache 2.0 license, surprised many with its gargantuan size and the speed at which it landed on HuggingFace.

  • Hunger for HuggingFace Know-How: Beginners and experts alike exchanged resources for learning HuggingFace with links to the platform’s NLP course and the acclaimed Stanford NLP textbook, while someone else inquired about accessible, deployable free LLM APIs like ollama for production deployment.


LlamaIndex Discord

  • Rethinking Retrieval with Interactivity: A tweet introduced a novel approach to RAG by treating documents as interactive elements rather than static texts to refine responses during retrievals.
  • LlamaIndex Updates and Integrations: Version 0.10.20 of LlamaIndex now includes an Instrumentation module, which along with a partnership with @langfuse, brings added observability to RAG pipelines as elaborated in this link and example notebooks.
  • Search-in-the-Chain for Enhancing QA: A paper by Shicheng Xu et al. suggested ā€œSearch-in-the-Chainā€ to improve question-answering by interleaving retrieval with planning and replanning, further discussed in this tweet.
  • Chaining OpenAI Agents and Query Tools: Discussion centered on chaining multiple OpenAI agents via LlamaIndex, with references to FunctionTool and QueryEngineTool and challenges encountered, though specifics like error messages weren’t disclosed.
  • RAG and HuggingFace Model Orchestration: Users sought advice on document preparation for RAG with Pinecone, and discussed integrating RAPTOR pack with HuggingFace models instead of OpenAI’s, accompanied by shared code from the llama_index GitHub repo.

Latent Space Discord

  • Yann LeCun’s Visual vs. Linguistic Reasoning Debate Heats Up: LeCun theorizes that visual models may have an edge over language-focused ones due to the ability to map directly to actions, a view he connects to his own lack of an inner monologue, as indicated in tweets and an interview.

  • Revelations in Resolution: The UPSCALER tool by Scene-scenario is shaking up the image enhancement market with promises of 10k resolution uplifts, as shared in a tweet, potentially prompting Magnific to reconsider its pricing structure.

  • Grok-1’s Grand Entrance: xAI’s Grok-1 with 314 billion parameters is released under Apache 2.0, spurring discussions on its potential compared to other models, as seen on Grok-1’s release page and various Twitter threads.

  • Paper Club Dives Deep Into Large Language Models: Enthusiasts in the #llm-paper-club-west channel discuss the mechanics and efficiency of attention mechanisms in transformers, helpful for understanding the design and scalability of current LLMs.

  • AI In Action Club Strategizes Learning: The #ai-in-action-club showcases structured AI discussions using a shared Google spreadsheet, offering insights on contrastive embeddings and suggesting LLMs for improved vector comparisons.


LAION Discord

  • Free Jupyter in Copilot, DALL-E Dataset Moves: Microsoft Copilot Pro subscribers can now freely access Jupyter Notebooks with libraries like simpy and matplotlib, akin to the ChatGPT Plus offering. The DALL-E 3 dataset has moved to a new Hugging Face repository, available for use with the load_dataset() function, ensuring reproducibility.

  • PyTorch and xformers Align: Engineers are tackling integration issues between xformers and PyTorch, with discussions pointing towards solutions like virtual environments and installing from a PyTorch-index URL for compatible versions.

  • Metadata Magic Boosts AI Captioning: The use of metadata in captioning prompts has been touted for improving AI-generated text, as demonstrated in an example script EveryDream2trainer’s caption_cog on GitHub.

  • Vast.AI Scrutinized for Security: Conversations about Vast.AI focused on its lack of adherence to strict security protocols, such as HIPAA. Participants recommend using major cloud providers for sensitive tasks due to the inherent root access risk with Vast.AI’s SSH container setup.

  • Beyond Free Colab Constraints & Clarifications: Colab’s limitations on web UIs have been clarified, noting only the free version is incompatible. Additionally, a Google Doc detailing a Generative Audio Video Text world model, research on continuous pretraining of LLMs, and the Grok open release repo were mentioned, but specific community reactions were not detailed. Rumors, stemming from Nvidia’s GTC keynote, that GPT-4 uses a MoE architecture with 1.8 trillion parameters were discussed, though this identification remains unconfirmed.


OpenAccess AI Collective (axolotl) Discord

  • Axolotl Takes LLM Training Further: Engineers discussed Axolotl as an efficient alternative to direct transformers code for fine-tuning models, using YAML files for configurations. There is interest in ScatterMoE optimizations (ScatterMoE repo) and how it might interplay with the hefty Grok model weights—although the performance underwhelms given Grok’s billed size, and attention is now on potential qLoRA FSDP handling (Grok GitHub).

  • Quantization and Next-Gen Hardware Stir Excitement: A spirited debate highlighted the use of AQLM for extreme model compression (AQLM GitHub). Moreover, the upcoming Nvidia RTX 5000 series is rumored to bring VRAM and bandwidth enhancements, which could revolutionize AI training (TechPowerUp article).

  • Dataset Dilemmas and Custom Solutions: LLM users encountered tokenizer foibles and HFValidationErrors due to incorrect path specifications. They sought advice on constructing custom completion datasets, with guidance to use jsonl files. Furthermore, engineers are exploring NVIDIA NeMo-Curator, a toolkit for data curation (NeMo-Curator GitHub), while model merging potentials are being assessed for solutions like Mistral.

  • Conversation on Model Compatibility and Performance: The guild has been tackling configuration concerns, seen in a user struggling with local datasets pointing for fine-tuning, coupled with a discovery of evaluation bugs potentially misleading epoch-dependent validation. In model development, compatibility is key, as evidenced by a strategy explored for merging models with uniform training formats.

  • Seeking Pathways for Adapter-Based Reinforcement: A query was raised about the feasibility of employing distinct LoRA adapters to run DPO (Direct Preference Optimization) on separate models, indicating an exploration into more nuanced, model-specific preference-tuning methodologies.


CUDA MODE Discord

  • Photonics Chips Blaze New Paths: A YouTube video titled ā€œNew Chip Breakthrough: x1000 fasterā€ introduces photonic chips claimed to offer a 1000x performance increase, presenting insights from Lightmatter’s advancements in photonic supercomputing.

  • Triton Trounces CUDA Puzzles: A new set of challenging Triton Puzzles was released to help users sharpen their skills, and a visualizer for Triton debugging was launched to simplify the understanding of complex load/store functions.

  • CUDA Community Courts Optimizations: Discussions in the CUDA channel ran deep into warp schedulers, active warps, and memory management, indicating the collective’s drive to maximize CUDA efficiency and sharing insights on project structuring for better performance.

  • New Strides in Machine Learning Hardware: The research group led by Prof. Mohamed Abdelfattah at Cornell University is highlighted for their work in reconfigurable computing and efficient machine learning, with an accompanying master’s-level course, ECE 5545 (CS 5775), that dives into ML optimization for hardware systems.

  • Ring Flash Attention Clears the Air: Extensive deliberations occurred around memory requirements for attention mechanisms like FlashAttention and RingAttention, featuring knowledge sharing and a look at Striped Attention’s stride towards better workload balance, described in an associated paper.
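A back-of-envelope calculation shows why those memory discussions matter: standard attention materializes an n Ɨ n score matrix per head, so memory grows quadratically with sequence length, which is the term FlashAttention/RingAttention-style tiling avoids by never storing the full matrix. Head count and fp16 storage below are assumed illustrative values, not figures from the discussion.

```python
def attn_score_memory_gib(seq_len, n_heads=32, bytes_per_elem=2):
    """GiB needed to materialize the full n x n attention score matrix
    for every head (standard attention, fp16)."""
    return n_heads * seq_len * seq_len * bytes_per_elem / 2**30

# Quadratic blow-up with context length:
for n in (4096, 32768, 131072):
    print(f"{n:>7} tokens: {attn_score_memory_gib(n):8.1f} GiB")
```

At 4k tokens the score matrices fit in ~1 GiB, but at 128k they would need about a terabyte, which is why tiled kernels recompute scores block-by-block instead.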

  • MLSys 2024: Where Machine Learning Meets Systems: Enthusiasm is building for the MLSys 2024 conference in May, focusing on the merging of machine learning with systems and inviting an interdisciplinary approach to pushing the boundaries of AI efficiency.


OpenRouter (Alex Atallah) Discord

  • Llama Model Prompt Compatibility Confirmed: A simple ā€œYes!ā€ confirmed that llama models can work with JSON structures containing keys such as ā€œsystemā€, ā€œuserā€, and ā€œassistantā€ when interfacing with the OpenAI JavaScript library.
  • Top-Up Tips for AI Chatbots: Users discussed payment methods for chatbot models, with the recommendation to ā€œtopup your balanceā€ in situations where a credit card isn’t connected.
  • Sonnet Soars in Roleplay: Sonnet was highlighted as the most consistent AI for roleplay, significantly outperforming others by avoiding repetition and randomness.
  • Chat with Context: Best practices for effective prompt formation were shared, involving ā€œjailbreakingā€ the model by including system context within user or assistant messages.
  • Book Breakdown to Prompts: lzlv 70B was mentioned as giving better and more consistent prompt outputs when a script is used to analyze book segments, outdoing other models which may give irrelevant results.
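The role-keyed message structure confirmed above, and the ā€œfold system context into a user messageā€ workaround, look like this in a minimal sketch (message contents are hypothetical):

```python
import json

# Standard chat-completions payload: a list of role/content messages.
messages = [
    {"role": "system", "content": "You are a concise assistant."},
    {"role": "user", "content": "Summarize chapter one."},
]

# Workaround for models that ignore or lack a system role: prepend the
# system context to the first user message instead.
folded = [
    {"role": "user",
     "content": "System: You are a concise assistant.\n\nSummarize chapter one."},
]

print(json.dumps(messages, indent=2))
```

The same JSON shape is what the OpenAI JavaScript library serializes, which is why Llama-style models behind OpenRouter can consume it unchanged.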

Links mentioned:

  • OpenRouter: A router for LLMs and other AI models
  • Groking GitHub: An open release repository for contributions on GitHub.

LangChain AI Discord

  • Choose Wisely Between astream_log and astream_events: A guild member debated the longevity of astream_log versus astream_events, with concern about possible deprecation as astream_events is still in beta. Meanwhile, an advanced research assistant and search engine project leveraging models like Claude 3 Opus and GPT-4 Turbo was announced, with two months’ free premium access for beta testers at Rubik’s AI.

  • LangChain Documentation Confuses Newbies: Addressing the complexity of LangChain documentation, members like namenot223_69478 found it difficult for beginners, prompting recommendations to dive into code post-basics. Specific feedback was solicited to streamline the documentation experience.

  • JavaScript vs. Python in RemoteRunnable: Dissimilar outcomes when using LangChain’s RemoteRunnable in JavaScript compared to Python were reported, with JavaScript failing to call /stream. Despite this snag, no recent updates were found to address streaming issues in JavaScript.

  • AI Enthusiasts Unveil Chatbots and Tools: An array of projects surfaced in the guild’s discussions, such as a chatbot for data analysis named langchain-chatbot, a Discord AI for managing bookmarks living-bookmarks, an invitation to provide insights for a health and productivity digital advisor, an AI Python scraper named Scrapegraph-ai, and a sales and research AI called lyzr-automata.

  • Tutorial Worthy AI Projects Spotlight: The Nutriheal app illustrated the versatility of AI with tools like Ollama and Pebblo, as shown in a 15-minute YouTube demo ā€œMaking an AI application in 15 minutesā€. Meanwhile, a YouTube tutorial titled ā€œPlan-and-Execute using Langgraphā€ walked through building a plan-and-execute style agent.


Interconnects (Nathan Lambert) Discord

  • Api-gate: Leaking LLM Secrets Unveiled: Researchers revealed that API calls can expose details of commercial LLMs, exploiting the softmax bottleneck to gain insights into proprietary models. These findings show feasible methods to estimate an LLM’s hidden size and revive concerns over API-protected model security.

  • Billion-Dollar Question: How Big is GPT-3.5? A study estimates OpenAI’s gpt-3.5-turbo could be around 7B parameters, sparking debates on the accuracy of this figure, especially against the backdrop of Mixture of Experts (MoE) models possibly defying such straightforward size estimations.

  • Drama Looms Over Open Source Definitions: A post by Sebastian Raschka hints at upcoming contention in the machine learning sphere, centering on OSS community standards and the definition of open-source, amid contrasts between Apache 2.0 and GPLv3 licenses.

  • Behemoth Grok-1 Open Sourced: Amidst community astonishment, Grok-1 Model, a 314B parameter MoE model, has been openly released under Apache 2.0, sparking analysis comparing its benchmarks with other models like Falcon and discussions on distribution methods.

  • Global Model Mail?: In response to the unconventional distribution of Grok-1, members jest about the logistics of shipping physical drives with model weights versus using cloud services, reflecting on the challenges and costs of making large models widely accessible.
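The softmax-bottleneck attack summarized above rests on a simple linear-algebra fact: full-vocabulary logits are W·h for hidden state h, so every logit vector lies in a subspace of dimension at most the hidden size, and the rank of enough collected logit vectors recovers it. A toy, pure-Python illustration (sizes are invented; real attacks must also contend with top-k truncation and API noise):

```python
import random

def matrix_rank(rows, tol=1e-9):
    """Numerical rank via Gaussian elimination with partial pivoting."""
    m = [row[:] for row in rows]
    n_rows, n_cols = len(m), len(m[0])
    rank = 0
    for col in range(n_cols):
        pivot = max(range(rank, n_rows), key=lambda r: abs(m[r][col]), default=None)
        if pivot is None or abs(m[pivot][col]) < tol:
            continue
        m[rank], m[pivot] = m[pivot], m[rank]
        for r in range(rank + 1, n_rows):
            f = m[r][col] / m[rank][col]
            for c in range(col, n_cols):
                m[r][c] -= f * m[rank][c]
        rank += 1
        if rank == n_rows:
            break
    return rank

random.seed(0)
VOCAB, HIDDEN, QUERIES = 50, 8, 20   # toy sizes; a real vocab is ~100k

# Hidden "model": logits are a fixed VOCAB x HIDDEN matrix times a hidden state.
W = [[random.gauss(0, 1) for _ in range(HIDDEN)] for _ in range(VOCAB)]
def logits(h):
    return [sum(w * x for w, x in zip(row, h)) for row in W]

# Each "API response" is one full logit vector; stack QUERIES of them.
L_mat = [logits([random.gauss(0, 1) for _ in range(HIDDEN)]) for _ in range(QUERIES)]
print(matrix_rank(L_mat))  # recovers HIDDEN = 8 despite VOCAB = 50 outputs
```

This is also why MoE architectures muddy the size estimate: the bottleneck reveals the final hidden dimension, not the total parameter count spread across experts.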


Alignment Lab AI Discord

  • Aribus Enthusiasm Meets Confusion: An Aribus Twitter post sparked interest and discussion, but also confusion among members regarding its potential applications.

  • Hunt for HTTP Savvy Embeddings: A dialogue surfaced regarding the search for an embeddings model tailored to HTTP responses, and whether transformer models could fulfill this niche role.

  • Callout for Custom-Tuned Mistral: A request circulated for a Mistral model fine-tuned on specific datasets, including the orca-math-word-problems-200k and nvidia/OpenMathInstruct-1.

  • Grok 1: Resource-Hungry Goliath: Conversations centered around the hefty resource requirements for training and fine-tuning Grok 1, and the fine line between its high costs and performance outcomes when compared to leading models like GPT-4 and Claude.

  • MoE Training Hits Efficiency High: There was talk of an efficient MoE (Mixture of Experts) training infrastructure, although constrained by the availability of computational resources.


LLM Perf Enthusiasts AI Discord

  • The Lazy Developer’s Motto: A guild member cited Devin as inspiration for a minimalist approach in app development, advocating for simplicity and voicing dissatisfaction with the existing open-source options for tasks more complicated than a local app with filesystem control.
  • Anthropic Accusations in Tweets: A tweet by @tszzl sparked a discussion by suggesting that Anthropic could be playing the role of ā€œcontrolled oppositionā€. Members debated the motive and veracity of this controversial claim.
  • Concerns Over KPU vs. GPT-4 Comparisons: The announcement of the Knowledge Processing Unit (KPU) by Maisa led to skepticism within the guild about their benchmarking methods, especially when comparing the KPU-enhanced GPT-4 to a non-turbo version of GPT-4. Latency concerns for practical applications were also raised.
  • Deciphering KPU Architecture: Misconceptions exist about the KPU’s capabilities and architecture, with some believing it includes innovative context window management and self-evaluation techniques. Maisa’s CEO, @davipar, clarified on Twitter that KPU acts like a GPU for knowledge management and provided an API key for independent evaluations and a notebook.
  • Claude Sonnet Scales Up: Inquiry into Claude Sonnet’s performance at scale was mentioned, specifically relating to projects with high token volume, indicating a search for efficient, large-scale LLMs.

DiscoResearch Discord

  • DiscoLM has a German Language Block: The DiscoLM-mixtral-8x7b-v2 model struggles to generate German responses, and a similar issue occurs during sequence-classification fine-tuning, resulting in a ValueError. This extends to other German language models, as feedback on Reddit highlights performance comparisons, for instance, between SauerkrautLM Una SOLAR Instruct and Starling LM 7B Alpha (Reddit Feedback).

  • DiscoLM-70b Hits a Snag: An attempt to run DiscoLM-70b locally using vllm met with issues, suggesting the possibility of compatibility problems despite being on a resource-rich machine.

  • Grokking the Durability of Grok: The Grok model, boasting 314B parameters, was shared in a GitHub link, with discussions around the practicality of running it, considering that it requires roughly 300GB of memory just to load.

  • Benchmarking the German Language Models: Conversations covered the need for better benchmarks for German language models, referencing a potentially influential paper titled supergleber-german-language-evaluation-benchmark, and hinted at a collaboration with university research, with a nod to the Informatik Uni Würzburg.

  • Server Migration Not a Piece of Cake: The shift of a demo server from a home setup to a professional facility has been marred by networking glitches, with the hope to get issues ironed out by the following week. Comparatively, machines hosted at home were said to encounter fewer issues, adding a humorous take on the server reliability debate.
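For context on the memory figure above, a back-of-envelope calculation of raw weight storage for a 314B-parameter model at common precisions (a rough sketch that ignores activations, KV cache, and optimizer state; the ~300GB checkpoint is consistent with roughly 8 bits per parameter):

```python
# Weight storage only: params * bits_per_param / 8 bits-per-byte.
PARAMS = 314e9

def weight_gb(bits_per_param):
    return PARAMS * bits_per_param / 8 / 1e9

for bits in (32, 16, 8, 4):
    print(f"{bits:>2}-bit weights: {weight_gb(bits):7.0f} GB")
# 32-bit: 1256 GB, 16-bit: 628 GB, 8-bit: 314 GB, 4-bit: 157 GB
```

Even at 4-bit quantization the weights alone exceed any single consumer GPU, which is what fuels the multi-GPU and shipping-drives jokes elsewhere in this issue.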


Datasette - LLM (@SimonW) Discord

  • Prompt Engineering Gets a Boost: Prodigy by Explosion integrates new tools for prompt engineering, aiming to streamline the process for engineers. The tools focus on treating prompt engineering as a data annotation challenge, though practical applications may see limitations.

  • Open-source and SDK Aids for Prompt Experimentation: Engineers discussed resources for prompt testing such as the open-source PromptTools and the Vercel AI SDK; the former was noted for its lack of version management, the latter for easy comparison of model responses.

  • Helicone AI Enters the Prompt Management Arena: Helicone AI is emerging as a potential all-in-one solution for prompt management, starting to develop prompt management and analysis tools that could be pivotal in handling complex AI tasks.

  • Exploring Multilingual Persona-Based Content Translation: A shared blog post detailed an innovative experiment in utilizing GPT-3.5-turbo for translating content through different personas, providing insights into the flexibility of language models in content creation.

  • In Search of a Seed: A single query posed by a user touches on the recoverability of seeds used by OpenAI models in past API requests, pointing towards an interest in reproducibility and control over the randomness in model outputs.


Skunkworks AI Discord

  • New Training Method Boost on the Horizon: baptistelqt is finalizing a paper on a new training method that enhances global accuracy and sample efficiency, evidenced by improved test accuracy on VGG16 from 0.04 to 0.1 in just one epoch. The method’s scalability to larger models is currently untested due to resource constraints.

  • Big Model Woes Meet Generosity: baptistelqt's concerns about the need for more computational power to test their method on larger models were met with an offer from far_el, who expressed interest and proposed discussing resource allocation.

  • Team-Up for Thoughtful Language Models: satyum showed interest in the ā€œQuiet-STaRā€ project, which promotes the idea that language models can ā€œthink before they speak,ā€ with qualifications in PyTorch and transformer architectures welcomed for potential collaborators.

  • Off-Topic Diversions: Members also shared non-technical content, such as a YouTube video shared by pradeep1148, which does not contribute to the technical or engineering discussions.

[N.B.: The shared YouTube video from the #off-topic channel did not contain enough context to assess its relevance to the technical discussions.]


PART 2: Detailed by-Channel summaries and links

Stability.ai (Stable Diffusion) ā–· #announcements (1 messages):

  • Introducing Stable Video 3D: Stability.ai has released Stable Video 3D (SV3D), a generative model that is an advancement over the earlier Stable Video Diffusion. SV3D creates high-quality multi-view images from a single object image and can generate 3D meshes, surpassing previous models like Stable Zero123 and Zero123-XL. Learn more

  • Two Variants of SV3D Released: The release introduces two variants, SV3D_u and SV3D_p. SV3D_u generates orbital videos from single image inputs, while SV3D_p extends these capabilities, though the message is truncated before further details are provided.

Link mentioned: Introducing Stable Video 3D: Quality Novel View Synthesis and 3D Generation from Single Images — Stability AI: When we released Stable Video Diffusion, we highlighted the versatility of our video model across various applications. Building upon this foundation, we are excited to release Stable Video 3D. This n…


Stability.ai (Stable Diffusion) ā–· #general-chat (988 messagesšŸ”„šŸ”„šŸ”„):

  • Stability AI News and SD3 Anticipation: Users are eagerly waiting for SD3 invites to be distributed and shared their predictions about the potential release in April. Discussions centered on updates including the announcement of Stable Video 3D, previous models, and partnerships with blockchain companies, as mentioned by stability.ai/news.

  • Technical Challenges and Fixes in SD: Users discussed technical issues with running different versions of AI models, mentioning inefficient code, specific needs such as AVX instruction sets, and dissatisfaction with certain model behaviors. They also shared experiences with running models like Stable Cascade and SDXL on consumer-grade hardware.

  • Pickle, PT Files, and SAFETENSORS: Users sought assistance with converting .pt files to SAFETENSORS. It was mentioned that most .pt files are safe to use, with a link to the official Python documentation warning about the security risks associated with pickle, Python's object-serialization module.

  • Concerns about Commercial Use and Open Source Licensing: There was a discussion about the licenses associated with various models, especially the right to use them commercially. Some users expressed indifference to open-source licenses, implying that if something is open-sourced, it no longer belongs to the creator.

  • Generating AI Art and Running on Low-End PCs: Users inquired about how to use Stable Diffusion for specific tasks like coloring line art without altering style and how to run AI models on PCs with low VRAM. Recommendations included using specific controlnets and trying services like civitai for cloud-based generation.
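The pickle warning mentioned above comes down to one property: unpickling can execute arbitrary code, because an object's `__reduce__` hook tells pickle which callable to invoke on load. A harmless, self-contained illustration of why untrusted .pt files (which are pickle-based) are risky and why safetensors exists (the function called here is a benign stand-in):

```python
import pickle

def run_on_load(msg):
    # Benign stand-in for arbitrary code; a hostile file could call
    # os.system or similar here instead.
    return f"EXECUTED: {msg}"

class Payload:
    def __reduce__(self):
        # pickle stores (callable, args); unpickling *calls* the callable.
        return (run_on_load, ("loaded an untrusted .pt-style file",))

blob = pickle.dumps(Payload())
result = pickle.loads(blob)   # merely deserializing runs run_on_load()
print(result)                 # EXECUTED: loaded an untrusted .pt-style file
```

Safetensors avoids this by storing only raw tensor bytes plus a JSON header, so loading a file can never trigger code execution.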

Links mentioned:


Perplexity AI ā–· #announcements (1 messages):

  • Unlimited Queries for Perplexity Pro Users: Perplexity has made the number of daily queries on Claude 3 Opus unlimited for Perplexity Pro users. Celebrate the enhanced access to what’s claimed to be the best LLM in the market today!

Perplexity AI ā–· #general (795 messagesšŸ”„šŸ”„šŸ”„):

  • Clarifications Requested on Perplexity’s Offerings: Users are asking for clarifications regarding Perplexity AI’s context limits for Opus, with specific concerns about how PDFs are interpreted and how to ensure the model adheres to prompts for concise responses.
  • Opinion Split on ā€œUnlimitedā€ Claims: Discussion involves whether the ā€œunlimitedā€ usage claimed by Perplexity is misleading. While some users find the term misleading due to the daily cap, others argue the limit is virtually unreachable for most users.
  • AI Models in Education: There’s a lively debate about the capability of AI like Claude 3 Opus in explaining complex topics in simple terms suitable for children, with anecdotes from parents actively using AI as an educational tool for their children.
  • Perplexity Business and Legal Discussions: Amidst user discussions, concerns are raised about the potential legal implications of misleading wording on usage limits, and there’s speculation about Perplexity’s growth and comparisons with competitors like Gemini Pro 1.5.
  • Parenting Experiences Shared: Users exchange stories and opinions on parenting and children’s learning capabilities, with mentions of using features like Perplexity’s Models and ChatGPT Pro with voice interfaces to nurture curiosity and understanding in kids.

Links mentioned:


Perplexity AI ā–· #sharing (35 messagesšŸ”„):

  • Creative Writing Exploration with Claude 3 Opus: Claude 3 Opus was tested on creative writing with the theme of ā€œever increasing intelligence until it’s unintelligible to humansā€. The exploration can be found here.
  • Visibility Matters: A reminder was issued to ensure threads are shared, using this Discord link for guidance on making content visible to others.
  • Perplexity AI in Action: Members shared their Perplexity AI searches on diverse topics varying from vulnerability management to creative invention prompts. Highlighted searches included assessing cleanliness here and exploring vulnerability management here.
  • Diverse Explorations with Perplexity AI: Participants engaged with Perplexity AI searching for insights on subjects such as the vulnerability of phones, Elon Musk, and even the physics of lava lamps. For instance, one search compared phones here.
  • Delving into Specific Technologies: Users delved into specifics with searches relevant to current tech, including topics on Apple investments and open-source releases. One example is looking into Apple’s acquisition of DarwinAI here.

Perplexity AI ā–· #pplx-api (64 messagesšŸ”„šŸ”„):

  • Unclear Model Deprecation Schedule: A member mentioned an expected deprecation of a model on March 15, but noted it was still active and providing distinct responses without rerouting to another model; there was uncertainty if the deprecation schedule had changed.
  • Discrepancies in API Responses: Members discussed inconsistencies when using sonar-medium-online for news prompts; one user received different sets of information about Donald Trump via the API versus the web-browser version.
  • Exploring Job Search with API: Members are experimenting with API prompts to retrieve job postings and are receiving mixed results, including both specific job links and generic job search site URLs.
  • Max Tokens Impact on Responses: A user inquired whether setting max_tokens would cause the model to provide incomplete responses; discussion pointed out that the model doesn't seem to have 'awareness' of token limits, and that it may truncate long responses or not fill the limit if it is set too high.
  • Queries About Possible Grok Integration and Image Vision Support: There were questions about whether Perplexity will support the Grok now that it’s open-source, and which API model supports image vision equivalent to GPT 4 Vision; members provided updates and clarified current model capabilities.

Links mentioned:

  • pplx-api: no description found
  • pplx-api form: Turn data collection into an experience with Typeform. Create beautiful online forms, surveys, quizzes, and so much more. Try it for FREE.

Unsloth AI (Daniel Han) ā–· #general (853 messagesšŸ”„šŸ”„šŸ”„):

  • Unsloth AI Discord Discusses Grok-1: Members discussed the new Grok-1 model released by xAI, questioning its usefulness given its size of 314 billion parameters, which is so large that it requires multiple high-end GPUs to even attempt inference.
  • Quantization Queries and AIKIT Integration: One user shared the addition of finetuning with unsloth as an integration into AIKIT, allowing deployment and fine-tuning of large language models with a configuration file and Docker. Another is seeking resources to understand quantization basics, like bits and the significance of 4-bit quantization.
  • Concerns Over Impersonation on Discord: There’s an alert about a scam account impersonating a user named Daniel (starsupernova), with suggestions to report the impersonator for fraudulent activities.
  • Fine-tuning Discussions and Resources Shared: The community exchanged information on fine-tuning LLMs, with notebooks shared to help structure datasets for tasks like creating an instruct model and the suitable VRAM required for models like Mistral and Gemma.
  • Grok-1 Performance and Architectural Decisions Questioned: It was mentioned that Grok-1 doesn't significantly outperform Llama 70B, and there's interest in why such a large model was released, suggesting marketing or other motives by Elon Musk and his team.

Links mentioned:


Unsloth AI (Daniel Han) ā–· #announcements (1 messages):

  • Unsloth AI Basking in GitHub Stardom: Unsloth AI is trending on GitHub this week, heralding a wave of support and a call to action for users to star the project. The project promises 2-5X faster performance and 70% less memory usage for QLoRA & LoRA finetuning. Unsloth on GitHub

Link mentioned: GitHub - unslothai/unsloth: 2-5X faster 70% less memory QLoRA & LoRA finetuning: 2-5X faster 70% less memory QLoRA & LoRA finetuning - unslothai/unsloth


Unsloth AI (Daniel Han) ā–· #random (25 messagesšŸ”„):

  • Serendipitous Username Encounter: A member shared an instance where they considered adopting the moniker TSAR bomba, which amusingly coincided with another user’s username, prompting discussion about such coincidences.
  • Childlike Intuition in the Digital Age: A discussion on how children are adept at absorbing information due to the lack of internal chatter, unlike adults who are distracted by digital devices.
  • Embracing Poetic Creativity: A user shared a poem titled ā€œAn Appeal to A Monkey,ā€ invoking the idea of learning from the simplicity and aliveness of monkeys in a modern complex world.
  • AI Model Fine-tuning Dialogue: Conversations about the merits and performance of Mistral-7b versus Gemma 7b for domain-specific classification tasks, including updates on bug fixes and the relative strength of the models.
  • Resource Sharing and Exploration: Users shared links to various projects, such as a GitHub pull request for a ā€œMixtral branchā€, a visualization of Pokemon RL agents on a shared map, and open-source UI elements at Uiverse.io.

Unsloth AI (Daniel Han) ā–· #help (568 messagesšŸ”„šŸ”„šŸ”„):

  • Training Complexities and Evaluating Mistral: A member experimented with models like Mistral-7B Instruct v0.2 and encountered issues with the Huggingface Inference API not working. Another member suggested making sure that 4-bit quantization was adopted for inference and clarified the process of using Unsloth’s colab notebook and eval_steps parameter.

  • Unsloth Parameters for GGUF Conversion: One member discussed issues with converting Gemma model to GGUF and received advice to execute a specific Python conversion command outside of the provided cell in the notebook.

  • Saving and Conversion Problems: A user attempting to run the ChatML notebook with a Gemma model encountered errors during the saving process. They received assistance with alternative commands to manually run the conversion in Kaggle or local environments.

  • Support for Full Fine-Tuning Discussed: Users inquired about the capabilities of Unsloth for full fine-tuning, with the understanding that it currently excels at LoRA and QLoRA. The Unsloth team explained that while full fine-tuning might be supported in the future, it is not currently the focus of Unsloth and manual editing of the codebase would be necessary.

  • Deployment and Fine-Tuning Queries: Discussion also included guidance on deploying models with Unsloth, focusing on those that are part of the open-source segment (e.g., Mistral) rather than proprietary ones like OpenAI’s GPT-4. Further, there was a clarification regarding the impact of eval_steps on finetuning outcomes, confirming it only dictates the frequency of evaluation, not the final finetuning performance.

Links mentioned:


Unsloth AI (Daniel Han) ā–· #suggestions (21 messagesšŸ”„):

  • Quick Reads on AI: A paper caught members’ attention and was acknowledged as ā€œAnother amazing paperā€ which would be added to a reading list.
  • Fine-tuning Epochs Debate: Yahir9023 and Flail_ discussed optimal epochs for model training; while Yahir9023 considered more epochs for voice neural networks, Flail_ suggested that generally for LLMs, ā€œ3 epochs is the standard.ā€
  • Learning and Style in LLMs: Flail_ mentioned that fine-tuning LLMs tends to affect their style more than their knowledge, and cautioned that more epochs could cause the LLM to overfit to specific data.
  • Sizing Up for Knowledge: Flail_ advised that the number of trainable parameters should match the amount of data, recommending a rank of 32 or 64 for 800,000 lines.
  • Slim AI Models on Showcase: Yahir9023 shared links to smaller models, like Tiny Mistral, that could be integrated into the Unsloth Repo, but faced mixed reactions from the community.
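The rank advice above maps directly to a trainable-parameter count: LoRA adds an A (rank x d_in) and B (d_out x rank) pair per adapted matrix, so each matrix contributes rank * (d_in + d_out) parameters. A sketch with assumed Mistral-7B-like shapes (4096-wide projections, 32 layers, 4 attention matrices; these shapes are illustrative, not exact):

```python
def lora_params(d_in, d_out, rank):
    """Trainable parameters LoRA adds to one weight matrix:
    A is (rank x d_in), B is (d_out x rank)."""
    return rank * (d_in + d_out)

# Assumed shapes: 4096x4096 projections, adapted in 32 layers
# across 4 attention matrices each.
d, layers, mats = 4096, 32, 4
for rank in (8, 32, 64):
    total = lora_params(d, d, rank) * layers * mats
    print(f"rank {rank:>2}: {total/1e6:6.1f}M trainable params")
```

Trainable parameters scale linearly with rank, which is why the advice pairs a larger corpus (800,000 lines) with rank 32 or 64 rather than the low ranks often used for style-only tuning.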

LM Studio ā–· #šŸ’¬-general (301 messagesšŸ”„šŸ”„):

  • Exploring Model Performance on Local Hardware: Members discussed which models to run on local machines with given hardware specs, like an M3 Pro with 18GB memory. Various model suggestions included CodeLlama, DeepSeek for coding, and TinyDolphin and WizardVicuna for other tasks.

  • Troubleshooting and Tips for Using LM Studio: Some users faced validating file integrity loops and issues with multiple GPU setups, while others shared tips, like using a local IP address instead of localhost when using Docker to deploy a web UI.

  • Inquiry about Local Model Functionality: One member asked if LM Studio could support features like face-swapping videos or reading files from a folder. The consensus was that these features are not currently supported within LM Studio, and alternatives were suggested.

  • Discussions on Model Capabilities and Hardware Requirements: Users engaged in extensive discussion about the capabilities of different graphics cards, with Tesla K40 and K80 as noteworthy mentions. Queries on running larger models like a 34GB LLM with different configurations of GPUs were addressed, highlighting the need for substantial VRAM.

  • Debates over the Utility of Open-Sourcing Large Models: The open-sourcing of the 314B parameter model Grok sparked debate on its practicality for local use, given its enormous size. Some users expressed skepticism about the ability to run such large models on typical hardware setups.

Links mentioned:


LM Studio ā–· #šŸ¤–-models-discussion-chat (138 messagesšŸ”„šŸ”„):

  • Command-R Model Awaits Merge for LM Studio: A member mentioned that support for the Command-R 35B model in LM Studio is imminent, pending the merge of llama.cpp pull request #6033 (https://github.com/ggerganov/llama.cpp/pull/6033) and an update to LM Studio. Once merged, CohereAI/Command-R should work.
  • Seeking Model Recommendations for Local Use: Various members discussed searching for appropriate models to run locally based on individual system constraints. A productive source for these inquiries is reddit's LocalLLaMA community (https://www.reddit.com/r/LocalLLaMA/).
  • Yi-9B-200K is a New Base Model: Clarification was provided that Yi-9B-200K operates with a 200k context limit and stems from a new base model series, separate from the Llama models. More information can be found on its Hugging Face model card (https://huggingface.co/01-ai/Yi-9B-200K).
  • Grok-1 Release Spurs Debate: Discussion about xAI's release of the Grok-1 model, a 314B parameter Mixture-of-Experts that's not fine-tuned for any specific task, reveals skepticism regarding its immediate utility without further tuning. Details about the Grok-1 base model release can be read on the xAI blog (https://x.ai/blog/grok-os), and the model's raw weights are shared under the Apache 2.0 license.
  • Running LLMs with Limited Hardware: Members exchanged advice for operating language models with GPUs that have restricted capacity, such as the Nvidia 1660 Super with 6GB VRAM. One suggestion included running smaller models like Gemma 2b, often requiring operational compromises or hardware adjustments.

Links mentioned:


LM Studio ā–· #🧠-feedback (12 messagesšŸ”„):

  • Command-R 35B Compatibility Issue: A member was corrected about the compatibility of the Command-R 35B v1.0 model with llama.cpp: llama.cpp doesn't support c4ai models yet. Despite confusion stemming from a file list reading "Added llama.cpp GGUF for Cohere's Command-R model," the issue remains unresolved.

  • Linux AMD Users Need Love Too: A member suggested that the Linux version download page should note that AMD users need OpenCL drivers to use the GPU with the program, indicating a small but helpful improvement for user guidance.

  • LM Studio’s Plugin Support Question: There was a query about whether LM Studio allows users to chat with their own documents or add plugins like autogen. A member clarified that plugins are indeed supported and advised turning on server mode to connect in this way.

Link mentioned: andrewcanis/c4ai-command-r-v01-GGUF Ā· Hugging Face: no description found


LM Studio ā–· #šŸŽ›-hardware-discussion (480 messagesšŸ”„šŸ”„šŸ”„):

  • GPU Dreams and Budget Realities: Members discussed the differences between running local Language Models (LMs) on various GPUs, including speculations about Tesla K80 performance vs. modern cards. While suggestions like second-hand 3090s were proposed, the allure of affordability steered some towards purchasing K80s despite potential limitations and cooling challenges.

  • Benchmarking Woes: The community talked about the difficulty in finding comprehensive LM benchmarking data across different hardware setups. Ideas were floated about having a centralized resource or leveraging LM Studio for standardized benchmarks, highlighting the wide-ranging variable performance results from others’ personal builds.

  • Epic Epyc Adventures: A detailed exploration into building a multi-GPU setup ensued, with members discussing the pros and cons of AMD Ryzen vs. Epyc CPUs for supporting multiple NVLinked GPUs. The high cost and performance payoff of moving to Epyc for better PCIe lane support was a central concern, indicating the serious investment required to run large models locally.

  • MacBooks in the Mix: In contrast to the custom rig discussions, some expressed satisfaction with using powerful MacBook models for running LMs. The MacBook’s ease of use, portability and performance offered a simpler albeit more expensive alternative to custom builds.

  • Past Purchases and Future Plans: Members shared their plans to procure components like used Teslas from eBay, discussing the particulars of Epyc CPU installations and the need for a torque screwdriver. Amidst the technical talk, humorous anecdotes and camaraderie were shared about the highs and lows of assembling a dream LM-running machine.

Links mentioned:


LM Studio ā–· #🧪-beta-releases-chat (4 messages):

  • Seeking Presets for Models: A user asked if there’s a list of presets available for different models. They were directed to a GitHub repository of LM Studio JSON configuration files, which includes a collection of example config files.

  • ROCm User Community Call: A user inquired about others using ROCm. Another user advised them to post their query in a specific channel for better visibility.

Link mentioned: GitHub - lmstudio-ai/configs: LM Studio JSON configuration file format and a collection of example config files.: LM Studio JSON configuration file format and a collection of example config files. - lmstudio-ai/configs


LM Studio ā–· #langchain (1 messages):

  • Inquiry about JSON Functionality: A member asked if anyone had successfully implemented a model with JSON function calling on the Local Inference Server. No further context or successful attempts were discussed.

LM Studio ā–· #avx-beta (5 messages):

  • AVX Beta Version Clarified: A member asked for clarification, wondering if the beta app using AVX is the same as the main application. It was confirmed that it’s an older version, and AVX support is not seen as a high-priority feature.
  • Model Compatibility in Beta: In response to an inquiry about model functionality, a member stated that while models will work in the beta, newer ones like starcoder2 and gemma are not supported.
  • Running Mistral on Beta: A clarification was made that despite the limitations, Mistral can be run on the AVX beta version of the app.

LM Studio ā–· #amd-rocm-tech-preview (5 messages):

  • Prebuilt Windows ROCm Libraries for AMD available: A link to GitHub was shared for prebuilt Windows ROCm libraries, specifically designed for gfx1031 and gfx1032 graphics cards. The library can be found at GitHub - brknsoul/ROCmLibs.
  • Exploring Multiple GPU Support in LM Studio: A user expressed interest in using multiple AMD GPUs, particularly a 6700 xt in tandem with a 7800 xt, with LM Studio and inquired about future support for such configurations.
  • Compatibility Issues with AMD 6700 xt and ROCm: A response clarified that the AMD 6700 xt is not officially supported by AMD for ROCm, which is why LM Studio, relying on unmodified ROCm libraries, does not support the 6700 xt.
  • Potential for Multi-GPU utilization in LM Studio: Another user suggested that possessing another 7000 series GPU would likely allow LM Studio to use the GPUs in parallel, implying current multi-GPU support with certain hardware.

Link mentioned: GitHub - brknsoul/ROCmLibs: Prebuild Windows ROCM Libs for gfx1031 and gfx1032: Prebuild Windows ROCM Libs for gfx1031 and gfx1032 - brknsoul/ROCmLibs


LM Studio ā–· #crew-ai (1 messages):

  • Choosing the Right Agent System: A member inquired about the progress on selecting an agent system for deepening and validating a creative concept, expressing shared interest in this decision-making process. No specific systems or criteria were mentioned in their query.

Nous Research AI ā–· #off-topic (56 messagesšŸ”„šŸ”„):

  • NVIDIA’s Next-Gen Memory Speed: NVIDIA’s upcoming GeForce RTX 50-series ā€œBlackwellā€ is rumored to feature GDDR7 memory at 28 Gbps, despite GDDR7’s capability for 32 Gbps. This choice follows a pattern seen with the RTX 20-series where they didn’t use the fastest available memory speed either.
  • Discord Users Explore AI Agents and Interfaces: There’s a rich conversation about the expectation of advanced AI models and interfaces, specifically discussing how OpenAI may be integrating custom AI models within agent interfaces—a strategy aimed at improving reliability and the overall functional robustness of AI systems.
  • Future AI Predictions Resonate: Users reference Sam Altman’s predictions from 2021, noting the accuracy of his timeframes for AI technology progress. The conversation highlights that AI companions, initially anticipated for 2030+, are becoming a reality earlier than expected.
  • Community Reacts to AI Industry Figures: Members express frustration with what they see as idle speculation and lack of action from high-profile AI industry figures. The chat reflects on the gap between the grand vision of AI’s future and the practical issues needing attention in the present.
  • Interactive AI Assistant Development Discussed: Participants share ideas and resources for programming an AI assistant that pauses intelligently when interrupted, with suggestions around editing conversation history and context manipulation leading the discussion. The conversation indicates a communal interest in refining the interactivity of AI systems.
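For context on what those per-pin speeds mean in practice: peak memory bandwidth is just the per-pin rate times bus width divided by eight. A minimal sketch, assuming a hypothetical 384-bit bus (the bus width is an illustrative assumption, not a confirmed spec):

```python
# Memory bandwidth from per-pin speed and bus width:
#   bandwidth (GB/s) = per_pin_gbps * bus_width_bits / 8
# The 384-bit bus below is an assumption for illustration only.
def memory_bandwidth_gbs(per_pin_gbps: float, bus_width_bits: int) -> float:
    return per_pin_gbps * bus_width_bits / 8

print(memory_bandwidth_gbs(28, 384))  # rumored 28 Gbps GDDR7 -> 1344.0 GB/s
print(memory_bandwidth_gbs(32, 384))  # GDDR7's 32 Gbps ceiling -> 1536.0 GB/s
```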

Links mentioned:


Nous Research AI ā–· #interesting-links (16 messagesšŸ”„):

  • Horny Claudes Generate Better Mermaid Diagrams: A chatbot referred to as Repligate suggests that a network of ā€˜horny claudes’ seems to produce more effective mermaid diagrams. This peculiar tactic was shared alongside some explicit samples.

  • The Contagious Charm Offensive: A discussion arose around the concept of ā€˜rizzing up’ models to receive better answers, with one comment suggesting applying charm to enhance performance.

  • Clone Wars: Members discuss the ethics and implications of replicating features from AI models, with one calling it ā€œreverse Sydneyā€ in reference to cloning model behaviors.

  • ORPO for Language Model Preference Alignment: Huggingface published a paper on a model-free monolithic odds ratio preference optimization algorithm, ORPO. It proposes a way to fine-tune language models like Phi-2 and Llama-2 for more aligned outputs without an additional preference alignment phase.

  • Open Source Intelligence War Heats Up: A detailed overview was shared about Elon Musk’s potential disruption of the AI landscape with Grok, a 314B parameter model. The discussion includes speculation about the performance of various models, including OpenAI’s GPT series, and industry dynamics shaped by Musk’s strategic maneuvers.
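In rough terms (a paraphrase of the ORPO paper's setup from its abstract; the exact weighting and notation may differ in the published details), the method adds an odds-ratio penalty to the ordinary SFT loss so the chosen response $y_w$ is favored over the rejected $y_l$ without a separate alignment phase:

```latex
\mathrm{odds}_\theta(y \mid x) = \frac{P_\theta(y \mid x)}{1 - P_\theta(y \mid x)},
\qquad
\mathcal{L}_{OR} = -\log \sigma\!\left(
  \log \frac{\mathrm{odds}_\theta(y_w \mid x)}{\mathrm{odds}_\theta(y_l \mid x)}
\right)

\mathcal{L}_{ORPO} = \mathbb{E}_{(x,\, y_w,\, y_l)}
\left[ \mathcal{L}_{SFT} + \lambda \, \mathcal{L}_{OR} \right]
```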

Links mentioned:


Nous Research AI ā–· #general (656 messagesšŸ”„šŸ”„šŸ”„):

  • Hermes 2.5 Still Reigns: In the Nous Research discussions, there’s acknowledgment that despite Grok-1’s massive 314-billion-parameter size, it barely outperforms GPT-3.5, while some 7-billion parameter models surpass GPT-3.5’s capabilities. Grok-1’s utility is questioned, with speculation over its necessity when LLaMA and smaller models serve well.
  • Yi Licensing Clears Up: Discussions hint at Yi-9B’s license now allowing commercial use with what seems to be an automated approval process. However, there’s skepticism, with some thinking this auto-approval could just be a marketing tactic.
  • Jensen Huang’s Accidental Leak?: NVIDIA CEO Jensen Huang’s GTC keynote possibly confirmed rumors about GPT-4’s architecture and scale, mentioning 1.8 trillion parameters and a Mixture-of-Experts (MoE) setup, which OpenAI has not officially confirmed.
  • Continual Pretraining Debates: The channel has conversations on the feasibility, benefits, and difficulties of continually pretraining models, particularly MoE models like Mixtral. While deemed useful for low-resource languages, there’s concurrence that without a high-quality data set, this approach could degrade overall model performance.
  • Open Source vs Commercial Giants: There is recognition that open-source efforts in large language models (LLMs) may be stifled if well-funded labs continue to keep their models and methods restricted. Some envision a future where each state or large entity maintains their own LLM tailored to their constitutional values, with open-source models serving as valuable bases for continual development.

Links mentioned:


Nous Research AI ā–· #ask-about-llms (25 messagesšŸ”„):

  • Perplexing Perplexity Issue: A member is struggling with calculating the perplexity for Llama2 and gets a perplexity (ppl) of 90.3 using a Kaggle notebook based on the HF guide. They are looking for insights from the community to solve this issue.

  • High Aspirations, High Costs: Aspiring to see a 20b base model surpass Mistral, a member explores the idea and is sobered by the mention of a potential $300,000 cost for such an endeavor.

  • Scaling Dreams Dashed: Practical discussions arise about upscaling models, specifically llama-2 13b to 20b. Members express skepticism regarding the effectiveness of upscaled models, and the costs associated with continued pretraining are estimated.

  • Exploring Model Merging: The conversation touches on the potential in surveying model merging techniques in the quest for achieving significant results with a 20b-sized AI model.

  • Pruning Away: A member shares their work on downscaling models with continuous pretraining and provides a detailed comparison between the Mistral-7B-v0.1 and their pruned version dubbed Smallstral.
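As a sanity check for debugging numbers like the 90.3 above: perplexity is just the exponentiated mean negative log-likelihood per token, so it helps to verify the pipeline on a toy case before blaming the model. A minimal sketch (pure Python, no model involved):

```python
import math

# Perplexity is the exponentiated mean negative log-likelihood per token.
# Verifying this on a toy distribution helps isolate tokenization or
# windowing bugs in a larger evaluation pipeline.
def perplexity(token_logprobs):
    """token_logprobs: natural-log probabilities of each target token."""
    nll = -sum(token_logprobs) / len(token_logprobs)
    return math.exp(nll)

# A model assigning probability 0.25 to every token has perplexity 4.
print(perplexity([math.log(0.25)] * 10))  # ā‰ˆ 4.0
```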

Links mentioned:


Nous Research AI ā–· #bittensor-finetune-subnet (18 messagesšŸ”„):

  • Link Confusion Averted: A user inquired about a potentially broken link, but was quickly reassured by another member that the link is indeed functional.
  • Awe and Respect for an Idea: Member fullstack6209 expressed long-standing awe and admiration for an unspecified idea.
  • Bittensor Chain Troubles: Users discussed recent issues with the bittensor chain, noting that it was ā€œbrokedā€ and fixes were not promptly implemented.
  • Subtensor Update Required: It was mentioned that the bittensor chain is back up, but requires an update to subtensor that not all users had completed.
  • Getting Tao for Bittensor Registration: A user sought advice on where to acquire tao for registration, with the suggestion to use USDT at MEXC exchange. Follow-up discussion included the challenges of using Kucoin due to withdrawal issues and the hardware requirements for starting with a 3090 graphics card.

Nous Research AI ā–· #rag-dataset (100 messagesšŸ”„šŸ”„):

  • Debating the Future of RAG: Members discussed evolving requirements for RAG (Retrieval-Augmented Generation) models, centering on desirable features such as low-latency responses, context understanding, and knowledge diversity. One member provided a detailed wish list for RAG-model properties, encompassing markdown-like outputs and advanced reasoning capabilities.

  • Striking Balance in Model Responses: The conversation shifted to the dilemma of whether models should strictly use external context or also interpolate with their own knowledge. One opinion suggested that models should have a default mode of relying on their knowledge, with the flexibility to operate in a ā€œRAG modeā€ only using external contexts upon command.

  • Model Outputs Under Scrutiny: Members critique the need for models to have output requirements such as markdown formatting. It was argued that while structured outputs, like in-line citations, are useful, flexibility in response styles should be preserved.

  • Incorporating External Context Expertise: The group discussed the potential of Cohere’s model to improve RAG functionalities by providing span highlighting and citations, although it was acknowledged that many current models, like GPT-4, already possess almost perfect recall over long contexts.

  • RAG-enhanced Models on the Horizon?: Users explored the idea of creating specialized, smaller models to function within RAG pipelines, such as a ā€œrelevant info extractor,ā€ to handle complex information retrieval tasks more efficiently. These specialized models could serve as intermediaries in larger computational frameworks.

Link mentioned: scratchTHOUGHTS/commanDUH.py at main Ā· EveryOneIsGross/scratchTHOUGHTS: 2nd brain scratchmemory to avoid overrun errors with self. - EveryOneIsGross/scratchTHOUGHTS


Eleuther ā–· #general (273 messagesšŸ”„šŸ”„):

  • Surge in Open Course Access: Members expressed astonishment and appreciation for the availability of Ivy League courses online for free, noting that even non-students often watch lectures from prestigious institutions like MIT. In a shared enthusiasm, the ease of access to high-quality educational materials like Stanford’s CS videos on NLP and platforms hosting detailed course content was praised.

  • Exploring Unique Frameworks and AI Enhancements: Discussions arose about intriguing projects like the Devin AI software engineer, which boasts capabilities like browsing documentation and debugging code, and Figure 01, a robot developed by Figure with OpenAI showing promising results in natural language interaction. Meanwhile, a member noted the potential implementation of an AI ThoughtStream concept for models, drawing comparisons to Self-Taught Reasoner (STaR) as seen in this related paper.

  • Grok’s Release Generates Buzz: Discussion about Grok-1 with its release on GitHub was prevalent, with members trying to understand its position compared to existing large language models like Claude and GPT-4. Uncertainty about the specific tuning and comparisons to the model’s performance on benchmarks led to a debate around its pretraining process and size.

  • The Challenge of Evaluating AI Models: Members debated evaluation metrics for AI models, with a focus on EleutherAI’s cookbook for calculating TFLOPs and a suggestion to utilize newer sources of written English to avoid legal issues and maintain an up-to-date standard for benchmarking base models.

  • Spotlight on Conferences and Training Times: Queries were raised about affordable conferences for AI paper submissions, with TMLR being recommended as a free option. Simultaneously, a discussion on training times for large models like Pythia-70M on different GPUs used NVIDIA’s performance figures combined with a research paper to estimate hypothetical durations.

Links mentioned:


Eleuther ā–· #research (245 messagesšŸ”„šŸ”„):

  • Confusion Over Baseline Performance Discussed: Comparisons are made showing discrepancies between reported and expected performance values on benchmarks like GSM8k and CQA, indicating a lack of clarity on how baselines are evaluated. Probing questions highlight the need for optimal parameters and sampling methods during evaluation.

  • Speculation on Grok’s Training and Usage: There’s debate on the quality and training strategy behind the Grok model, with musings that Grok’s development was driven by Elon Musk’s directives and optimized for Twitter’s internal hardware specs. Analysts await independent finetuning and benchmarks for a clear assessment of its capabilities.

  • Efficiency of Speculative Sampling Questioned: A discussion on speculative sampling and its application to different model architectures such as Mamba and RNNs suggests that while it may save on memory bandwidth, the practicality and efficiency for certain architectures still raise questions.

  • Label Smoothing and Benchmarks Scrutinized: Queries about the current use of label smoothing in language model training lead into critical views on the reliability of benchmarks, emphasizing the need for better benchmarks and questioning the value of personnel decisions in model efficacy.

  • Scaling and Cost-Efficiency Considered for Large Models: Conversations about scaling LLMs focus on the cost of running models as substantial as Grok, drawing conjecture on the economic and technical decisions behind its development, and suggesting that alternative training and model choices might yield savings.

Links mentioned:


Eleuther ā–· #scaling-laws (11 messagesšŸ”„):

  • Language Model Sensitivity to Dataset Complexity: A member shared that language model scaling laws seem to be influenced by dataset complexity and syntactic properties. The use of gzip was highlighted as a good predictor for dataset-specific scaling behaviors.

  • Seeking Feedback on Scaling Laws Research: Preliminary results on the impact of data complexity on scaling laws have been shared, and comprehensive experiments are in progress. They are looking forward to using a specific package to derive more quantitative data and fitted scaling laws.

  • Clarification on Data Labels: A question arose regarding additional labels in some data visualization; they were clarified to be syntactic specifications of the Probabilistic Context-Free Grammar (PCFG) used to generate the dataset.

  • Concerns About Plot Readability: There was a discussion about the difficulty in interpreting a plot due to the scale being inappropriate for the smaller model, with acknowledgment and mention of alternative log scale plots.

  • Perplexity vs. Intrinsic Entropy Discourse: Members discussed how perplexity relates to the intrinsic entropy of datasets and how this impacts comparisons across datasets. It was suggested that an optimal range of lexical densities for data could result in more efficient pretraining practices.
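The gzip idea above can be illustrated in a few lines: compression ratio serves as a cheap proxy for dataset complexity, with low-entropy (repetitive) text compressing far better than varied text. A minimal sketch with made-up sample strings:

```python
import gzip

# gzip compression ratio as a cheap proxy for dataset complexity:
# highly repetitive (low-entropy) text compresses far better than varied text.
def gzip_ratio(text: str) -> float:
    raw = text.encode("utf-8")
    return len(gzip.compress(raw)) / len(raw)

repetitive = "the cat sat on the mat. " * 200
varied = " ".join(f"token{i}-{i * i}" for i in range(600))
assert gzip_ratio(repetitive) < gzip_ratio(varied)
```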


Eleuther ā–· #interpretability-general (13 messagesšŸ”„):

  • Inquiring About Sampling Strings: One member asked if there’s a canonical way to sample strings from a pre-specified set of 1-gram, 2-gram, …, n-gram statistics on a vocabulary. The discussion moved towards understanding the relation of n-gram statistics to sampling.
  • Clarify the Hierarchical Nature of N-grams: Another member clarified that specifying the statistics for n-grams determines the statistics for all lower orders, pointing out the hierarchical nature of n-gram language models.
  • The Autoregressive Approach to Sampling: It was explained that sampling from the specified n-gram distribution can be done autoregressively, which results in the maximum entropy distribution matching those statistics.
  • Step-by-Step Sampling Technique Explained: The method involves first sampling from the unigram distribution for the first token, then sampling from the bigram conditional distribution for the second, and so on.
  • Reference to N-gram Model Implementation: For practical reference, a GitHub link to an implementation of this sampling process was provided, alongside a brief visit to the Wikipedia page on n-gram language models.
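The step-by-step procedure above can be sketched directly; the corpus here is a toy stand-in:

```python
import random
from collections import Counter, defaultdict

# Maximum-entropy sampling consistent with given unigram/bigram statistics:
# draw the first token from the unigram distribution, then each subsequent
# token from the bigram conditional P(next | prev), autoregressively.
corpus = "a b a c a b a b c a".split()
unigrams = Counter(corpus)
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def sample(length, rng):
    toks = list(unigrams)
    out = [rng.choices(toks, weights=[unigrams[t] for t in toks])[0]]
    while len(out) < length:
        cond = bigrams[out[-1]] or unigrams  # back off if prev has no successors
        toks = list(cond)
        out.append(rng.choices(toks, weights=[cond[t] for t in toks])[0])
    return out

print(sample(8, random.Random(0)))
```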

Links mentioned:


Eleuther ā–· #lm-thunderdome (31 messagesšŸ”„):

  • Newcomer’s Inquiry into lm-eval-harness Integration: A member new to lm-eval-harness asked about integrating it with their LLM model, specifically looking for a demo code for llama on gaudi2 using megatron deepspeed. They also inquired about function inheritance and the fixed format of dictionaries passed in argument, noting a lack of direct command line overrides.

  • Misleading Model Behavior in lm-eval-harness: A user reported that lm-eval defaulted to gpt-2-small even when a different model was specified. Upon investigation, it was discovered that passing model_args twice in the command caused the harness to fall back to gpt-2-small.

  • Inconsistencies in Reported Scores on the Open LLM Leaderboard: A member questioned the accuracy of the reported number on the open LLM leaderboard for llama2-70b MMLU score, which differed from their own experimental results. Clarification was offered that the discrepancy could be due to differing methods of averaging over MMLU subtasks.

  • Request for Apple’s New Multimodal Model Integration: A member suggested integrating Apple’s new MM1 multimodal model into the lm-evaluation-harness to appear on the Open LLM Leaderboard.

  • Evaluation Deadlocks and Workarounds in lm-evaluation-harness: Users discussed an issue on GitHub (#1485) concerning a deadlock in the lm-evaluation-harness. A workaround involving avoiding concurrent dataset access was mentioned, along with a possible connection to multiprocessing in the evaluation harness’ code.

  • Location of Downloaded Models in lm-harness: Users asked and shared information about the location of model downloads by lm-eval, which is typically in the Huggingface cache directory and controlled by environment variables such as HF_HOME, TRANSFORMERS_CACHE, and HF_DATASETS_CACHE.

  • Release of lm-eval v0.4.2 and Help with PR Reviews: It was announced that version 0.4.2 of lm-eval is now available on PyPI, with a call for more contributions and assistance in PR reviews. Users are encouraged to ping for pending PR reviews that need attention.

  • Discussion on Representing Translated Tasks in lm-eval-harness: A conversation took place regarding the incorporation of Multilingual evals translated via machine translation into lm-eval-harness. It was proposed that a possible method would be collecting such tasks under a directory and clearly tagging them as translations.

  • Explanation of likelihood vs. likelihood_rolling in lm_harness: A distinction was made between likelihood and likelihood_rolling within lm_harness. loglikelihood_rolling is for unconditional loglikelihood computations, while loglikelihood is for conditional loglikelihood of a string given another string, and can be used for multiple-choice evaluations.

  • Stride Method and Context Length Considerations in Perplexity Evaluation: It was noted that the lm-evaluation-harness calculates perplexity using non-overlapping sliding windows, and an enhancement to expose the option for a sliding window with a custom stride is planned. It was also mentioned that larger context lengths in models reduce the effect of the stride choice.
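The difference between the two windowing schemes can be sketched as follows (window and stride sizes are illustrative, and this is not the harness's actual code):

```python
# Chunking a token sequence for perplexity evaluation.
# Non-overlapping windows (the current behavior described above) vs a strided
# sliding window, where the first (ctx - stride) tokens of each window serve
# only as context and would not be scored.
def nonoverlapping_windows(tokens, ctx):
    return [tokens[i:i + ctx] for i in range(0, len(tokens), ctx)]

def strided_windows(tokens, ctx, stride):
    return [tokens[i:i + ctx]
            for i in range(0, max(1, len(tokens) - ctx + stride), stride)]

toks = list(range(10))
print(nonoverlapping_windows(toks, 4))  # [[0, 1, 2, 3], [4, 5, 6, 7], [8, 9]]
print(strided_windows(toks, 4, 2))
```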

Links mentioned:


Eleuther ā–· #gpt-neox-dev (3 messages):

  • Clarity on The Pile’s Shuffling Status: A member inquired about the shuffling status of The Pile dataset. Another member clarified that the original files distributed were not shuffled, but the preprocessed and pretokenized data on HF is ready-to-use and was utilized by Pythia.

  • Shuffling Detail for Pile Components: It was added that individual components of The Pile are definitely not shuffled, with some organized chronologically. However, the original train/test/validation split is expected to be shuffled to ensure a good mix throughout the dataset.


OpenAI ā–· #ai-discussions (193 messagesšŸ”„šŸ”„):

  • Understanding AI’s ā€œUnderstandingā€ of Language: Members discussed whether AI truly comprehends language or merely predicts the next word without understanding. The conversation covered emergent properties of AI, comparisons to human cognition, and speculation about AI’s potential to develop sensory fidelity comparable to humans.

  • AI and the Concept of Consciousness: The debate continued with thoughts on consciousness, with some suggesting that much of human cognition is about reading and inducing emotions, a task that requires less effort when interacting with AI, as it doesn’t compete for social status or wealth.

  • DALL-E 3 Praised for Precision: A user praised DALL-E 3 for its precise response to detailed prompts and its ability to respect negative prompts, stating that OpenAI has done stellar work with the model.

  • Claude vs GPT-4 Performance: Users compared the newer Claude-3 with GPT-4, noting that Claude may be the better conversationalist, while GPT-4 still performs well in certain tasks like programming. There was a call for OpenAI to release newer models or options to revert to earlier GPT-4 versions.

  • Accessibility to Sora and AI Support Issues: A member inquired about access to Sora, and solbus provided a link to the current accessibility status. There were also grievances expressed about difficulties contacting OpenAI support, with advice offered on navigating the help site and raising tickets.

Link mentioned: Enterprise privacy


OpenAI ā–· #gpt-4-discussions (34 messagesšŸ”„):

  • Enquiring Minds Want to Know About GPT-5: Multiple members have asked about the release date of GPT-5, showing curiosity about the future of OpenAI’s language models.
  • Integration of Web Searching in GPT API: A member questions how to add web searching capabilities into the GPT API, similar to those in ChatGPT-4.
  • Seeking Bot-Building Assistance on Mobile: A user expressed the desire to create an OpenAI chatbot on their phone and customize it, particularly for Discord, and sought help using BotGhost and OpenAI.
  • Locating Elements in Playwright: A member discussed issues with ChatGPT Turbo 3.5, which was not adhering to their method preference for locating elements in Playwright tests.
  • Technical Troubles and Reporting: Users have noted technical problems with GPT responses, including inability to prompt responses and perpetual loading. A member inquired about where to report a bug and reach the Chat GPT authors, with responses directing them toward specific channels and help.openai.com.
  • Concerns on GPT Filter Sensitivity: Users urge for a revision of GPT’s content filters, suggesting they are too restrictive and hinder creative writing by flagging content commonly seen in children’s cartoons.

OpenAI ā–· #prompt-engineering (79 messagesšŸ”„šŸ”„):

  • Prompt Engineering Challenges for Classification: A member is searching for a methodological way to test various prompt architectures for a classification use case. They are experimenting with the amount of contextual information within the prompt to improve recall and minimize false positives, considering the limits of the total context window suggested by another user.

  • Playwright Locator Issues with GPT-3.5 Turbo: There are discussions concerning GPT-3.5 Turbo’s failure to produce suitable Playwright test code, specifically it not using the correct locator syntax. Alternatives such as GPT-4 were suggested for improved output.

  • Frustrations with Refusals to Perform Tasks: Several members expressed frustration with the model refusing tasks, seeking advice on how to encounter fewer denials. Members shared techniques like meta-prompting to navigate around the AI’s self-moderation that leads to task refusal, and requested examples where issues persist to offer concrete help.

  • Struggling with Changes in ChatGPT’s Response Patterns: Users notice a change in how ChatGPT’s algorithm handles tasks, with increasingly aggressive bias minimization leading to more frequent ā€œI can’t do thatā€ responses. There were discussions about how the model may develop a refusal stance that persists through a conversation, impacting workflows.

  • Guidance on Utilizing GPT Web Search with Multiple Queries: A member sought advice on how to prompt GPT to use web search with multiple queries to access a wide array of sources and gather more comprehensive information. The conversation highlighted the difference between guiding the model to search for multiple sources versus the user directly controlling the specific queries being made.
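One way to make the prompt-architecture comparison above methodological is to score each candidate prompt's predictions on a labeled set and compare precision (false-positive control) against recall. A minimal sketch with toy labels (the positive class name is arbitrary):

```python
# Score a candidate prompt's classification outputs against ground truth.
# Precision tracks false positives; recall tracks missed positives.
def precision_recall(y_true, y_pred, positive="yes"):
    tp = sum(t == positive == p for t, p in zip(y_true, y_pred))
    fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

truth = ["yes", "yes", "no", "no", "yes"]
preds = ["yes", "no", "no", "yes", "yes"]
print(precision_recall(truth, preds))  # precision and recall both 2/3 here
```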


OpenAI ā–· #api-discussions (79 messagesšŸ”„šŸ”„):

  • Prompt Optimization Challenges: A member is experimenting with OpenAI for a classification use case, trying different prompt architectures to minimize false positives while maintaining recall. They seek a methodological way to test different architectures and are considering using a custom GPT.

  • Model Selection for Playwright Tests: A user struggles to get GPT-3.5 turbo to output executable code for Playwright tests and is inquiring if GPT-4 might be more adept with the latest libraries. They’ve discussed attempting to correct code format across multiple API calls without success.

  • Dealing with AI Task Refusals: Members exchange frustrations and advice on how to handle the AI’s refusal to complete certain tasks. Examples and strategies vary, from making edits and using meta-prompting to accepting the AI’s limitations and starting new conversations.

  • Shifting AI Behavior Over Time: There’s a discussion about noticeable changes in ChatGPT’s behavior, particularly an increased tendency to refuse tasks that it previously would complete. Users are adapting with clarification prompts and the ā€œFive Why’sā€ technique to navigate the model’s evolving content policy.

  • Harnessing GPT for Multiple Web Searches: Members debate on how to best prompt GPT to use web search for gathering information using multiple queries for a more comprehensive search result. Despite some confusion, an example approach is shared to guide GPT for multiple sources by crafting detailed and directive prompts.


HuggingFace ā–· #general (96 messagesšŸ”„šŸ”„):

  • Cross-Encoder Multi-GPU Tuning Inquiry: A member asked for guidance on fine-tuning a cross encoder model using multiple GPUs, wondering about specific parameter modifications required to accommodate the hardware change.

  • Aya Demo Enhanced: The Aya demo has been improved with a very high repetition penalty thanks to a community contribution. There’s a call for contributors to implement a slider in the gradio interface.

  • NVIDIA’s Powerful New GPUs: Discussions revolve around NVIDIA’s new H100 GPU, its power consumption (up to 850W), and long lead times for purchase. Memorable points include the combined server CPU and GPU on the same board, direct chip liquid cooling, and the allocation of rack power for modern supercomputers.

  • Data Hosting Leaderboard on HuggingFace: A member shared a leaderboard they created, indicating the amount of data hosted on HuggingFace repositories, showcasing the platform’s significant model and data hosting capacity.

  • Grok-1 Model Emerges from xAI: Grok-1, a 314 billion parameter Mixture-of-Experts model, is discussed after its release, with marvel over its size and the rapidity with which it was uploaded to HuggingFace by a user following its release under Apache 2.0 license.

Links mentioned:


HuggingFace ā–· #today-im-learning (12 messagesšŸ”„):

  • Exploring Optimization Techniques: A member shared their findings on GridSearch, RandomSearch, and Bayesian Optimization, admitting confusion specifically about the application of Bayesian Optimization.
  • Hugging Face 101 Required: A member is seeking assistance on how to use Hugging Face and requested a basic explanation of what it is.
  • AI Duets Hit a Sour Note: A newcomer in AI music generation struggles with creating harmonious duets and seeks advice on how others make covers with multiple artists, such as duets involving Harry Styles and Taylor Swift, without the AI-generated voices sounding ā€œstrangled.ā€
  • MLOps with Hugging Face & SageMaker Pipelines: User shares a link to a workshop notebook (Workshop 3: MLOps: End-to-End Hugging Face Transformers with the Hub & SageMaker Pipelines) explaining how to create an end-to-end MLOps pipeline with Hugging Face Inference DLCs and Amazon SageMaker.
  • Troubleshooting Model Access: A member needs assistance accessing a Hugging Face Model (Mistral-7B-Instruct-v0.2.GGUF) after encountering a 404 error, indicating that the specified repository is not found or accessible on the Hugging Face model hub.
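On the optimization-techniques confusion above: grid search exhaustively evaluates a fixed lattice of hyperparameters, random search samples the space, and Bayesian optimization differs by fitting a surrogate model to past trials to choose the next point. A toy sketch of the first two, using a made-up loss surface:

```python
import random

# Toy 2-D hyperparameter search. The quadratic loss is a stand-in for
# validation loss, with its minimum at lr=0.3, reg=0.01.
def loss(lr, reg):
    return (lr - 0.3) ** 2 + (reg - 0.01) ** 2

def grid_search(lrs, regs):
    # exhaustively evaluate every lattice point
    return min(((lr, reg) for lr in lrs for reg in regs), key=lambda p: loss(*p))

def random_search(n, rng):
    # sample n points uniformly from the search space
    trials = [(rng.uniform(0, 1), rng.uniform(0, 0.1)) for _ in range(n)]
    return min(trials, key=lambda p: loss(*p))

print(grid_search([0.1, 0.3, 0.5], [0.0, 0.01, 0.1]))  # (0.3, 0.01)
print(random_search(20, random.Random(0)))
```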

Links mentioned:


HuggingFace ā–· #reading-group (12 messagesšŸ”„):

  • Linguistic and ML Fusion Fascinates: Members discussed the impressive nature of combining linguistics and machine learning, particularly in models dealing with both English and Chinese, considered highly distinct languages.
  • Delving into Medusa’s Potential: The paper on Medusa was shared for its novel approach to LLM inference by using multiple decoding heads to predict several tokens in parallel, which could theoretically create more efficient models.
  • Design Lessons in Multimodal LLMs: An abstract for a paper was posted, highlighting design insights for Multimodal Large Language Models, with key findings on architecture components and data choices essential for state-of-the-art results.
  • The Intricacies of Dominant Languages in LLMs: A member speculated that heavy skewing toward English in corpora could inadvertently lead to models replicating English or European patterns in language and thought.
  • The Penetration of LLMs in Peer Reviews: A study from arXiv was cited showing that between 6.5% and 16.9% of text in AI conference peer reviews might have been significantly modified by LLMs, revealing certain behavioral trends among reviewers.

Links mentioned:


HuggingFace ā–· #NLP (18 messagesšŸ”„):

  • Seeking NL2SQL Pipeline Enhancement: A member is working on a NL2SQL pipeline using BAAI/llm-embedder for embeddings, TheBloke/nsql-llama-2-7B-GGUF for NL2SQL model, and FAISS for the vector store, asking for recommendations on improving the accuracy of the embedding model and NL2SQL model for their application.

  • Grace Hopper Superchip Discussion Initiated: An announcement about the NVIDIA Grace Hopper Superchip was made, highlighting its potential for high-performance computing (HPC), AI, and data center applications, but it was not followed up with further details or related discussion.

  • Guidance Requested on NLP Learning Resources:

  • Looking for a Conformer ASR Training Tutorial: A member asked for a tutorial on training a conformer model for automatic speech recognition (ASR), but no recommendations or links were provided in the conversation thread.

  • Inquiry for Deployable Free LLM API: A member inquired about any free LLM APIs suitable for deploying in production and was directed to try ollama, but there was no follow-up confirming its suitability or providing additional options.
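On the sparse-retrieval side of a pipeline like the NL2SQL one above, a self-contained BM25 scorer (Okapi-style, with the common Lucene-style smoothed IDF; the documents here are toy stand-ins) looks roughly like:

```python
import math

# Minimal BM25 scorer for ranking tokenized documents against a query,
# the kind of sparse retriever often paired with dense embeddings.
# k1 and b are the commonly used defaults.
def bm25_scores(query, docs, k1=1.5, b=0.75):
    N = len(docs)
    avgdl = sum(len(d) for d in docs) / N
    scores = []
    for doc in docs:
        score = 0.0
        for term in query:
            df = sum(term in d for d in docs)          # document frequency
            idf = math.log((N - df + 0.5) / (df + 0.5) + 1)
            f = doc.count(term)                        # term frequency in doc
            score += idf * f * (k1 + 1) / (
                f + k1 * (1 - b + b * len(doc) / avgdl))
        scores.append(score)
    return scores

docs = [["select", "users", "table"], ["weather", "today"], ["users", "email", "column"]]
print(bm25_scores(["users", "table"], docs))
```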

Link mentioned: Introduction - Hugging Face NLP Course


LlamaIndex ā–· #blog (7 messages):

  • Reimagining Retrieval Augmentation: A tweet introduced a novel approach to handle complex queries in a RAG pipeline, suggesting that documents should be treated as interactive tools rather than static text for more dynamic interactions. The concept aims to provide more intricate responses during retrievals and is discussed further here.
  • Instrumentation Tools Level Up: LlamaIndex released version 0.10.20, featuring a new Instrumentation module designed to enhance observability within RAG pipelines. Users can check out examples of basic observability and API call observations in linked notebooks detailed in the tweet available here.
  • Advanced QA with Search-in-the-Chain: The team highlighted a paper by Shicheng Xu et al. proposing ā€œSearch-in-the-Chainā€ as an evolution to RAG for question-answering, where retrieval is interleaved with planning and can prompt replanning if necessary. More details on this innovative method can be found here.
  • Navigate Job Market with RAG and LlamaParse: Kyosuke Morita’s blog post demonstrates how combining LlamaParse with LlamaIndex can create a RAG-based Job Assistant to match candidates with jobs using data extracted from CVs. The blog provides insights into text extraction and job matching solutions here.
  • Announcing Observability Integration with @langfuse: LlamaIndex announced their partnership with @langfuse to provide a two-line solution for adding observability to RAG pipelines, combining features like tracing, prompt management, and evaluation. The integration is aimed at enriching RAG applications with more robust monitoring capabilities, and details can be accessed here.

Links mentioned:


LlamaIndex ā–· #general (303 messagesšŸ”„šŸ”„):

  • Exploring Multi-Agent Chaining: A user discussed how to chain multiple OpenAI agents using LlamaIndex, referencing LlamaIndex documentation on deploying agents with tools. There was a mention of using FunctionTool and QueryEngineTool in this context, although they encountered an error requiring further clarification.

  • Xinference Support Inquiry: Two participants conversed about Xinference CPU support in LlamaIndex, sharing a local deployment guide and GitHub links. A member sought advice on installing in a cluster to enhance inference time, referencing Xinference integration with LlamaIndex.

  • Efficient Use of BM25 with LlamaIndex: A user sought assistance on how to employ a BM25-based model for embeddings within LlamaIndex, linking to relevant documentation examples. The conversation evolved into discussing appropriate settings for bm25 as a HuggingFaceEmbedding model.

  • Seeking Access to LlamaCloud: An individual inquired about accessing LlamaCloud, receiving guidance to sign up at https://cloud.llamaindex.ai, where they can gain access with a simple registration process.

  • Discussion on Embedding Models and Compatibility: A member raised performance concerns about ChatGPT with regard to embedding model changes over time and pondered whether using LlamaIndex to manually set embedding models could prevent such issues. They advised their client to re-upload files to possibly improve ChatGPT’s performance consistency.
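For readers unfamiliar with BM25 (the ranking function behind the third bullet): unlike embedding models, it scores documents lexically from term frequencies. A minimal pure-Python sketch of Okapi BM25 scoring (illustrative only, not LlamaIndex’s implementation):

```python
import math
from collections import Counter

def bm25_scores(query_terms, docs, k1=1.5, b=0.75):
    """Okapi BM25 score of each tokenized document against query_terms."""
    N = len(docs)
    avgdl = sum(len(d) for d in docs) / N
    # document frequency and smoothed idf per query term
    df = {t: sum(1 for d in docs if t in d) for t in query_terms}
    idf = {t: math.log(1 + (N - df[t] + 0.5) / (df[t] + 0.5)) for t in query_terms}
    scores = []
    for d in docs:
        tf = Counter(d)
        score = 0.0
        for t in query_terms:
            num = tf[t] * (k1 + 1)
            den = tf[t] + k1 * (1 - b + b * len(d) / avgdl)
            score += idf[t] * num / den
        scores.append(score)
    return scores

docs = [
    "the cat sat on the mat".split(),
    "dogs and cats living together".split(),
    "pure functional programming".split(),
]
print(bm25_scores(["cat", "mat"], docs))  # only the first document matches
```

Because it only needs token counts, BM25 is often used as a cheap retrieval complement to dense embeddings rather than as an embedding model itself.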



LlamaIndex ā–· #ai-discussion (4 messages):

  • RAG Tutorial with LlamaParse: A new YouTube video was shared, titled ā€œRAG with LlamaParse, Qdrant and Groq | Step By Step,ā€ offering a walkthrough on creating an effective RAG using LlamaParse, Qdrant, and Groq.
  • Seeking RAG Preparation Tips: One member inquired about the top 5 tips for preparing a document for RAG and how to automatically add metadata to Pinecone for optimal retrieval, but specifics were not discussed.
  • AI with RAG Pipeline Insight: A Medium blog post titled ā€œEmpowering Voices: AI Assistant with RAG Pipeline, Memory, and LlamaIndexā€ was highlighted, providing insight into creating an AI assistant with these technologies.
  • RAPTOR Pack Integration with HuggingFace Models: A user is attempting to modify a RAPTOR pack implementation to use HuggingFace models instead of OpenAI’s but encounters errors, and has shared their modified code snippet from the llama_index GitHub repo’s example. They are asking for help with the correct coding implementation.

Link mentioned: RAG with LlamaParse, Qdrant and Groq | Step By Step: In this video, I will show you how to create a effective RAG with LlamaParse, Qdrant and Groq. I will explain what LlamaParse is and briefly walk you through…


Latent Space ā–· #ai-general-chat (202 messagesšŸ”„šŸ”„):

  • Deciphering Yann’s Perspective: Discussions reveal that Yann LeCun may favor visual or planning models over language because he lacks an inner monologue, as discussed in various tweets including a tweet from @kzSlider with a link to an interview with someone who lacks an internal monologue.
  • Views on Linguistic Reasoning: Members like mr.osophy argue that we overestimate our use of language for reasoning and that visual representations can be more directly mapped to actions, like learning a tennis serve by watching someone rather than translating the visual into language first.
  • Scenarios Leading AI Offerings: Scenario’s new UPSCALER tool might prompt Magnific to reconsider pricing, offering image quality enhancement up to 10k resolution, as announced by @emmanuel_2m in a tweet.
  • GPT Models, Computing, and Frameworks Discussions: Points of interest include a comment on an extraordinary increase in compute leading to faster model creation by @granawkins, the offering of free GTC virtual session codes by @harpreetsahota, and an in-depth discussion about the scaling and tokenization of multilingual models driven by @picocreator.
  • Grok-1 Enters the Chat: The open release of xAI’s Grok-1, a 314 billion parameter Mixture-of-Experts model under the Apache 2.0 license, is highly anticipated but its effectiveness as compared to other models is questioned in a flurry of tweets and counter opinions. Details and reactions can be followed through Grok-1’s release page and the conversation thread starting with @gen_being_96’s mention of the model.



Latent Space ā–· #ai-announcements (2 messages):

  • Paper Club Session Alert: The community is currently diving into ā€œA Comprehensive Summary Of Large Language Modelsā€ paper in the paper club session. All interested are welcome to join and participate in the discussion.

  • Community Shoutout: A member’s work received attention on Hacker News, as highlighted by a fellow community participant.



Latent Space ā–· #llm-paper-club-west (20 messagesšŸ”„):

  • Attention Mechanism Fundamentals Clarified: One member questioned why attention mechanisms were developed, leading to another explaining that attention was created to address the limitation of fixed-length encoding vectors in traditional models, allowing the decoder to access all relevant parts of the input sequence.
  • Understanding the Motivation Behind Using Attention: A member conveyed a lack of intuition for the design choices in transformer-based models. Another participant responded, clarifying that attention allows decoders to focus on the most relevant parts of the input sequence.
  • Parallelization Perks of the Transformer Architecture: In the context of parallelization, a member elucidated that the attention mechanism in transformers allows processing different tokens independently and concurrently, leading to more efficient computation and faster training.
  • Acknowledging Efficient Computation in Transformers: The importance of scaled dot product operations was highlighted, which eliminates the need for ā€œwaitingā€ seen in sequential calculations, thus enhancing the efficiency of models like GPT.
  • Gratitude for Educational LLM Session: Participants thanked the session hosts, noting that their insights provided a deeper understanding of the lineage of large language models (LLMs) and helped demystify the conceptual underpinnings of the technology.
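The mechanism discussed above fits in a few lines. A dependency-free sketch of scaled dot-product attention for a single query (illustrative, not any framework’s API): the output is a weighted mix of all values, so there is no fixed-length bottleneck, and since each query’s computation is independent, queries can run in parallel.

```python
import math

def softmax(xs):
    m = max(xs)                      # subtract max for numerical stability
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attention(q, keys, values):
    """Scaled dot-product attention for a single query vector."""
    d = len(q)
    scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d) for k in keys]
    weights = softmax(scores)        # how much each position contributes
    return [sum(w * v[i] for w, v in zip(weights, values))
            for i in range(len(values[0]))]

q = [1.0, 0.0]
keys = [[1.0, 0.0], [0.0, 1.0]]      # the first key matches the query
values = [[10.0, 0.0], [0.0, 10.0]]
out = attention(q, keys, values)
print(out)  # leans toward the first value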

Latent Space ā–· #ai-in-action-club (36 messagesšŸ”„):

  • Passive Participation: A member mentioned they are only passively participating due to being in an in-person meeting.
  • Experience with Loading Screens: An individual reported having experiences akin to the RAG (Retrieval-Augmented Generation) when encountering loading screens.
  • Retrieval-Augmented Generation Discussed: An article on advanced RAG was shared, explaining concepts from small to big retrieval.
  • Exchanging Research Ideas: Participants discussed alternative methods to cosine similarity in vector space models, referencing contrastive embeddings and suggesting the use of LLM for vector comparisons.
  • Shared Resource Spreadsheet for AI Topics: A Google spreadsheet containing topics, dates, facilitators, and resources for AI-related discussions was shared, highlighting the structured approach to group discussions.
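As a baseline for the alternatives discussed in the fourth bullet, cosine similarity itself is tiny; a stdlib-only sketch:

```python
import math

def cosine_similarity(a, b):
    """dot(a, b) / (|a| * |b|): direction-only similarity in [-1, 1]."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

print(cosine_similarity([1, 0], [1, 0]))  # same direction -> 1.0
print(cosine_similarity([1, 0], [0, 1]))  # orthogonal -> 0.0
print(cosine_similarity([1, 2], [2, 4]))  # scaled copy -> ~1.0 (magnitude ignored)
```

The last line shows why alternatives get proposed: cosine ignores vector magnitude entirely, which contrastive embeddings and LLM-based comparisons handle differently.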

Link mentioned: AI In Action: Weekly Jam Sessions: 2024 Topic,Date,Facilitator,Resources,@dropdown UI/UX patterns for GenAI,1/26/2024,nuvic,<a href=ā€œhttps://maggieappleton.com/squish-structure&quot;&gt;https://maggieappleton.com/squish-struct…


LAION ā–· #general (168 messagesšŸ”„šŸ”„):

  • Jupyter Notebooks Accessibility in Copilot: Microsoft Copilot Pro subscriptions provide free access to Jupyter Notebooks with libraries like simpy and matplotlib, much like the offering in ChatGPT Plus.

  • DALL-E 3 Dataset Update: The DALL-E 3 dataset was not removed from Hugging Face; it was relocated to a new location which can be found at Hugging Face’s dalle-3-dataset repository. Users can use the load_dataset() function with a specific revision for reproducibility as documented by Hugging Face.

  • Xformers Discussions for PyTorch Compatibility: Members discussed solutions for integrating xformers with PyTorch, including the use of virtual environments, a specific installation order, and the PyTorch index URL which guarantees a compatible version of xformers.

  • Leveraging Metadata in Captioning Prompts: One member shared insights about using metadata hints in captioning prompts to improve AI-generated text accuracy. An example script utilizing cog captioning was mentioned, which could be accessed at EveryDream2trainer’s caption_cog GitHub script.

  • Questions About Vast.AI and Security: Users discussed the operation and security aspects of running containers on Vast.AI, noting that it does not adhere to strict security standards like HIPAA. Vast.AI operates by providing SSH access to a container running on a host machine, but ultimately, the host has root access, so sensitive tasks should use major cloud providers instead.



LAION ā–· #research (13 messagesšŸ”„):

  • Misconception About Web UIs on Colab: A member expressed concern that web UIs are considered risky, but another clarified that you simply can’t use free Colab with them.
  • Cross-Channel Confusion: There was a brief exchange where a participant mistakenly thought that a discussion about web UIs was related to cutting-edge research.
  • Document Shared on Multimodal World Model: A link to a Google document describing a Generative Audio Video Text world model was posted but without further context or discussion.
  • Exploration of Continuous Pretraining of LLMs: A research paper was shared, discussing efficient methods for continuous pretraining of Large Language Models (LLMs) with a focus on overcoming distribution shift challenges.
  • Code Repository for Grok Open Release Introduced: A GitHub link to xai-org/grok-1 was shared, referencing an open release of Grok without additional comments or discussion.
  • Rumors About Nvidia’s GPT-4 Architecture: Discussion revolved around the rumor that Nvidia confirmed a Mixture of Experts (MoE) architecture with 1.8 trillion parameters, though it was noted that the figure isn’t necessarily referring to GPT-4. An image from Twitter was mentioned to support the claim.



OpenAccess AI Collective (axolotl) ā–· #general (99 messagesšŸ”„šŸ”„):

  • Training Dilemmas with Llama Chat Model: Discussing the limitations of LoRA training on the llama chat model, members expressed concern that training with completion data might not impact the chat behavior significantly due to information loss when converting from raw to Q/A format. A conversion to the ShareGPT format using axolotl was proposed, with subsequent tokenization into llama-2 chat format.

  • Axolotl As A Training Tool: Axolotl, a tool used for fine-tuning models, was highlighted as a useful alternative to directly setting up transformers code, allowing users to pass a YAML file instead. There were questions about memory optimizations in Axolotl compared to plain LoRA, and whether it wraps around the HF (Hugging Face) training ecosystem for easy configuration.

  • Exploring Quantization Methods: A debate ensued over the most effective quantization method, with AQLM mentioned as potentially superior to AWQ. The discussion included a GitHub repository for AQLM, highlighting its use for extreme model compression.

  • Anticipation for NVIDIA’s Next-Gen RTX 5000 Series: There was excitement about Nvidia’s upcoming RTX 5000 series, based on leaks suggesting significant VRAM and bandwidth improvements. A TechPowerUp article detailed potential specifications, sparking a conversation on the implications for AI training.

  • Grok-1 Weights and Sequoia Framework Unveiled: Grok-1 model weights release was shared, leading to inquiries about its potential MoE setup and benchmarking needs versus models like Mixtral. The introduction of Sequoia, a speculative decoding framework to speed up LLM inference, was highlighted, with a link to Together AI’s Sequoia blog post discussing its performance on consumer-grade GPUs.



OpenAccess AI Collective (axolotl) ā–· #axolotl-dev (24 messagesšŸ”„):

  • ScatterMoE Optimizations Excite Developers: Discussion of a new ScatterMoE model optimization that significantly outperforms the Huggingface implementation and MegaBlocks. The repo can be found on GitHub.
  • In the Pipeline: Training with ScatterMoE: A member asked about the specific configuration for training with ScatterMoE, indicating they would appreciate guidance on running a test to check for normal loss behavior.
  • Torch Upgrade Considered: Conversation about upgrading Axolotl’s PyTorch version to at least 2.2 due to incompatibility with the new kernels on torch 2.0.1. A member confirms they are using torch 2.2.1.
  • Mixed Feedback on Grok Weights: There’s curiosity and skepticism about using Grok weights with Axolotl, members mentioned an attempt with 314B Grok weights, but noted the performance was not impressive considering the model size.
  • Ambitions for qLoRA with Grok: Discussion about whether Axolotl qLoRA FSDP could handle the large Grok models, while another member points out that the released Grok may only be the int8 checkpoint, referencing a GitHub link.



OpenAccess AI Collective (axolotl) ā–· #general-help (35 messagesšŸ”„):

  • Troubleshooting Tokenizer Quirks: In addressing tokenizer issues, a participant is finetuning an instruct model for document summarization but encounters a problem where the model inconsistently generates the <summary> tag, sometimes with an unexpected space in front of it. It was suggested to look into potential causes such as whitespace characters.

  • Local Dataset and Model Configuration Woes: A newcomer to LLMs is learning how to point a fine-tuning configuration at local datasets and models rather than pulling from Hugging Face. The correct approach is to specify the full file path rather than a relative path, and to make sure it isn’t formatted as a repo identifier, which throws a HFValidationError.

  • Dataset Dilemmas and JSON Structure: A member is experiencing ā€˜index out of range’ errors while trying out different data types for a conversational dataset. It is suggested to check for empty rows, double-check the readme for conversation options, and ensure that ā€˜role’ in the dataset array isn’t empty.

  • Bug Hunting: Evaluating Role Mapping and Sample Packing: One user reported that a role map was missing ā€œbard,ā€ ā€œbing,ā€ and ā€œchatgptā€ in their test dataset. Another discovered that an eval set was considered too small for sample packing during a 2-epoch run but not during a 10-epoch run, pointing to a validation check that should be epoch-independent but isn’t.

  • Building Custom Completion Datasets: For building a custom completion dataset, members are advised it’s as simple as making a jsonl file with each line containing a JSON object with the text attribute containing the completion text. The README file contains instructions on how this can be accomplished.
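The jsonl recipe in the last bullet can be shown in a few lines; a minimal sketch:

```python
import json
import os
import tempfile

# One JSON object per line, each with a single "text" attribute.
records = [
    {"text": "The quick brown fox jumps over the lazy dog."},
    {"text": "def add(a, b):\n    return a + b"},
]

path = os.path.join(tempfile.mkdtemp(), "completions.jsonl")
with open(path, "w", encoding="utf-8") as f:
    for rec in records:
        f.write(json.dumps(rec) + "\n")

# Round-trip to confirm the file parses line by line.
with open(path, encoding="utf-8") as f:
    loaded = [json.loads(line) for line in f]
print(loaded == records)  # True
```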


OpenAccess AI Collective (axolotl) ā–· #datasets (8 messagesšŸ”„):

  • NVIDIA NeMo-Curator Introduced: A member linked to NVIDIA’s NeMo-Curator, a scalable toolkit for data curation on GitHub, asking if anyone has tried it.
  • Search for Hybrid Mistral Model: A participant inquired about a Mistral FT specifically trained on both the orca-math-word-problems-200k dataset and the nvidia/OpenMathInstruct-1 dataset.
  • Lack of a Combined Mistral Model Noted: One member stated that they don’t believe a combined model exists, mentioning the size of NVIDIA’s dataset as a possible reason.
  • Assessing the Potential of Model Merging: A conversation took place around mergekit and the possibility of using model merging as an alternative to training on orca-math-word-problems-200k dataset.
  • Strategy for Merging Models Explored: In discussing the use of mergekit, it was advised to ensure that models to be merged are trained with the same chat format, opening a conversation about further fine-tuning for compatibility.

Link mentioned: GitHub - NVIDIA/NeMo-Curator: Scalable toolkit for data curation: Scalable toolkit for data curation. Contribute to NVIDIA/NeMo-Curator development by creating an account on GitHub.


OpenAccess AI Collective (axolotl) ā–· #rlhf (1 messages):

duh_kola: Is it possible to use different lora adapter to do dpo on another model


CUDA MODE ā–· #general (43 messagesšŸ”„):

  • Photonics Chip Insights Shared: A video was shared discussing a breakthrough in photonics chips claimed to be 1000 times faster, sourced from Anastasia’s tech update video on YouTube, with related information from Lightmatter, a photonic supercomputer company. The video titled ā€œNew Chip Breakthrough: x1000 fasterā€ can be found at Photonics Chips YouTube Intro.

  • Alternative GPU Profiling Services Discussion: Users discussed cloud GPU services for profiling CUDA kernels. RunPod.io and LambdaLabs were suggested, but a user experienced permission issues with NVIDIA GPU Performance Counters on RunPod, planning to report back on LambdaLabs’ service.

  • Model Orchestration Query: Flyte’s popularity for ML pipeline orchestration was queried among the predominantly machine learning-focused group, but no further discussion or answers were provided.

  • PyTorch Tensor Management Explained: The ethos behind PyTorch’s explicit tensor device management was discussed, highlighting the philosophy of no unexpected behavior or magic in terms of device handling in contrast to TensorFlow’s implicit device placement and its costs.

  • Questions on LLM Access and Introductions: Inquiries were made regarding free LLM APIs, easy to deploy open source models, and the proper channels for introductions and long questions within the community. No particular recommendations or orientation guidelines were provided.



CUDA MODE ā–· #triton (7 messages):

  • Triton Debugging Visualizer Launched: A new visualizer for Triton debugging was introduced, designed to make the spatial structure of loads/stores easier to inspect when implementing complex functions. No visuals or links were provided.

  • Triton Puzzles for Practice: A set of Triton Puzzles has been created to help users familiarize with Triton, noted to be more challenging than GPU puzzles. There are 2 known bugs with the visualizer: occasional double visualization and segmentation faults.

  • Resource Request for Learning Triton: A member asked for guides or resources to learn Triton for someone already proficient in CUDA, and was directed to try the newly shared Triton Puzzles and the official website tutorials.

  • New Triton Puzzle Interpreter Feature: A new interpreter feature that runs Triton on CPU was mentioned, indicating that the puzzles do not require a GPU. A minor text correction was also pointed out: ā€œsinceā€ should replace ā€œsenseā€ in the instructions.

  • Positive Reception to Triton Puzzles: The introduction of Triton Puzzles was met with enthusiasm from other members, with expressions of interest in checking them out.

Link mentioned: Google Colaboratory


CUDA MODE ā–· #cuda (68 messagesšŸ”„šŸ”„):

  • Understanding Warp Schedulers and Thread Optimization: A member inquired about the configuration of warp schedulers and the number of threads they control, seeking to optimize CUDA efficiency and occupancy by understanding the simultaneous thread execution capabilities.

  • Clarifying the Meaning of ā€˜Active Warp’ in CUDA: Discussion revolved around what constitutes an ā€˜active warp’ within CUDA. An ā€œactive warpā€ was tentatively defined as having at least one active thread, despite some debate over the technical possibility of an active warp having no active threads due to conditional code skips.

  • Managing Producer and Consumer Memory: The chat hosted an extensive discussion on memory management for producers and consumers in CUDA environments. It was clarified that ProducerProvides and ConsumerTakes are not related to memory allocation but to ownership and usage within iterations of data processing.

  • Benefits of a Memory Manager: The use of a memory manager was proposed to simplify complex memory handling across various hardware and execution scenarios, leading to clearer code with fewer explicit details, potentially avoiding idle GPU times.

  • Sharing CUDA Project Structure Best Practices: Queries concerning CUDA project structure arose, particularly regarding whether the main() function should be placed in a cpp or cu file and the method of including a kernel function from a cu file. This led to a broader conversation about project separation into .cuh, .cu, and .cpp files.
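The warp-scheduler question in the first bullet usually comes down to occupancy arithmetic. A back-of-the-envelope sketch in Python, using hypothetical per-SM limits (2048 threads and 32 blocks per SM; real limits vary by architecture, and register/shared-memory pressure is ignored):

```python
import math

def occupancy(threads_per_block,
              max_threads_per_sm=2048, max_blocks_per_sm=32, warp_size=32):
    """Fraction of an SM's warp slots filled, considering only the
    thread-count and block-count limits (register and shared-memory
    pressure, the other usual limiters, are ignored here)."""
    warps_per_block = math.ceil(threads_per_block / warp_size)
    resident_blocks = min(max_blocks_per_sm,
                          max_threads_per_sm // threads_per_block)
    active_warps = resident_blocks * warps_per_block
    max_warps = max_threads_per_sm // warp_size  # 64 warp slots with these numbers
    return active_warps / max_warps

print(occupancy(256))  # 8 blocks x 8 warps = 64/64 -> 1.0
print(occupancy(96))   # 21 blocks x 3 warps = 63/64 -> 0.984375
```

The second call shows why odd block sizes leak occupancy: leftover thread slots can’t hold a whole extra block.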



CUDA MODE ā–· #suggestions (5 messages):

  • Exploring Reconfigurable Computing and ML: A member shared a YouTube link to Prof. Mohamed Abdelfattah’s research group at Cornell University, highlighting their work on reconfigurable computing and efficient machine learning. The research group’s website with further information was also provided: www.mohsaied.com.

  • Hardware-centric ML Course - ECE 5545 Revealed: Another link was shared detailing a master’s level course ECE 5545 (CS 5775) which focuses on machine learning systems from a hardware perspective, inclusive of optimizations like pruning and quantization and the design of both hardware and software components. The course objectives encourage understanding and optimizing ML algorithms for various hardware platforms. The full syllabus is available for those interested: Please read the syllabus.

  • Textbook Mystery in ML Course Syllabus: A point of confusion was brought up regarding the unidentified ā€œtextbookā€ mentioned on the course website for ECE 5545, as it was not directly listed.

  • Locating Course Textbook in Lecture Video: The mystery of the missing textbook title for the ECE 5545 course was resolved with a tip that the first lecture video mentions it.

Links mentioned:

  • ML Hardware and Systems
  • Prof. Mohamed Abdelfattah: This is the channel for Prof. Mohamed Abdelfattah's research group at Cornell University. We are researching reconfigurable computing and efficient machine learning. For more information check out...

CUDA MODE ā–· #jobs (1 messages):

vim410: Depends. But yes.


CUDA MODE ā–· #beginner (5 messages):

  • Solid CUDA Foundation for ML Transition: A PhD student in computational fluid dynamics with beginner to intermediate CUDA skills expressed their intention to switch to CUDA computing for ML. They listed their proficiency in various CUDA concepts and planned to use the server’s lectures and Andrej Karpathy’s Zero to Hero series as a learning path.

  • DL Framework Play Recommended: A member suggested to the PhD student that they should start by experimenting with a DL framework such as torch, explaining that ML/DL optimization largely revolves around matrix multiplications, nonlinearities, softmax, and normalization functions.

  • Book Recommendation for Advanced CUDA Learning: The same member recommended the book Programming Massively Parallel Processors as a resource to deepen the PhD student’s CUDA knowledge, sharing an Amazon link to the book.

  • Cued Up for General CUDA Expertise: It was noted that while Programming Massively Parallel Processors has some DL content, it is mainly an excellent resource for general CUDA programming. The emphasis is on the comprehensive nature of the book for CUDA rather than its ML/DL content.



CUDA MODE ā–· #pmpp-book (6 messages):

  • CUDA Indexing Confusion Resolved: A member questioned the indexing calculation in chapter 2, question 2, specifically about why not use i = blockIdx.x * blockDim.x + threadIdx.x * 2. It was clarified that such an approach causes double-counting of indices and an example was provided demonstrating the issue.

  • Query about Blogging Exercise Solutions: A participant considered blogging their solutions to the exercises and expressed uncertainty due to lack of response from the authors about sharing potentially instructor-only content. They also shared the concern of not having an educational email address to contact the authors.

  • Potential Check with the Authorities: In response to concerns about blogging exercise solutions, a member offered to check with Wen-mei, presumably one of the authors or figures of authority related to the content, for proper guidance.
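The double-counting issue in the first bullet can be checked with a quick host-side simulation (plain Python standing in for the CUDA grid): with i = blockIdx.x * blockDim.x + threadIdx.x * 2, threads in different blocks collide, whereas i = (blockIdx.x * blockDim.x + threadIdx.x) * 2 gives each thread its own pair of elements.

```python
def indices(formula, grid_dim, block_dim):
    """Element index each (blockIdx.x, threadIdx.x) pair would touch."""
    return [formula(b, t, block_dim)
            for b in range(grid_dim) for t in range(block_dim)]

# The formula from the question: only threadIdx.x is doubled.
suspect = indices(lambda b, t, bd: b * bd + t * 2, grid_dim=2, block_dim=4)
# The usual formula when each thread handles 2 adjacent elements.
correct = indices(lambda b, t, bd: (b * bd + t) * 2, grid_dim=2, block_dim=4)

print(sorted(suspect))  # [0, 2, 4, 4, 6, 6, 8, 10] -- 4 and 6 are double-counted
print(sorted(correct))  # [0, 2, 4, 6, 8, 10, 12, 14] -- all distinct
```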


CUDA MODE ā–· #ring-attention (14 messagesšŸ”„):

  • Discussion on Attention Matrix Materialization: A blog post writer inquired about the memory requirements for the attention matrix in blockwise attention methods, citing potential confusion in Liu’s papers regarding Ring Attention. Liu states an advantage of linear memory scaling with the block size c, but questions arise when comparing this to the c² (squared chunk length) memory requirement mentioned for FlashAttention.
  • Seeking Clarification on Memory Scaling: A discussion unfolds regarding the memory scaling properties of attention mechanisms, especially under FlashAttention and RingAttention. Members acknowledge a potential discrepancy between the perceived linear memory scaling and the actual implementation within SRAM constraints, indicating complexities in chunk size management.
  • Striped Attention’s Improved Workloads: A member provides a link to a paper that introduces Striped Attention, which addresses workload imbalances in causal transformer models by distributing tokens uniformly and achieving better throughput.
  • Dissecting FlashAttention Optimization: Members delve into the inner workings of FlashAttention’s computation and memory savings, suggesting that sequential updates with kv-blocks on q-output pairs may avoid the need to form a c^2 matrix. There’s a shared interest in thoroughly understanding and potentially becoming experts on FlashAttention.
  • Potential Misinterpretation in Ring Attention Memory Requirements: In trying to make sense of the memory requirement statements, members speculate that there might be a misunderstanding with the term linear scaling. Suggestions hint towards linear scaling possibly being in relation to the number of blocks, rather than the block size itself.
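The point about avoiding a c² score matrix can be illustrated with the online-softmax trick that FlashAttention builds on: process key/value blocks sequentially, keeping only a running max, a running normalizer, and a running weighted sum per query. A toy single-query sketch in plain Python (a sketch of the idea, not the actual kernel; scaling by √d is omitted in both versions for brevity):

```python
import math

def naive_attention(q, keys, values):
    """Reference: materializes the full score vector before the softmax."""
    scores = [sum(qi * ki for qi, ki in zip(q, k)) for k in keys]
    m = max(scores)
    w = [math.exp(s - m) for s in scores]
    z = sum(w)
    return [sum(wi * v[j] for wi, v in zip(w, values)) / z
            for j in range(len(values[0]))]

def blockwise_attention(q, keys, values, block=2):
    """One pass over kv-blocks, never holding more than `block` scores:
    a running max, normalizer, and unnormalized output are rescaled as
    each block arrives (the online-softmax trick)."""
    m = float("-inf")
    z = 0.0
    acc = [0.0] * len(values[0])
    for start in range(0, len(keys), block):
        kb = keys[start:start + block]
        vb = values[start:start + block]
        scores = [sum(qi * ki for qi, ki in zip(q, k)) for k in kb]
        m_new = max(m, max(scores))
        scale = math.exp(m - m_new)      # rescale old state to the new max
        z *= scale
        acc = [a * scale for a in acc]
        for s, v in zip(scores, vb):
            w = math.exp(s - m_new)
            z += w
            acc = [a + w * vj for a, vj in zip(acc, v)]
        m = m_new
    return [a / z for a in acc]

q = [0.3, -0.2]
keys = [[1.0, 0.0], [0.5, 0.5], [-1.0, 2.0], [0.0, 1.0]]
values = [[1.0], [2.0], [3.0], [4.0]]
print(naive_attention(q, keys, values))
print(blockwise_attention(q, keys, values))  # matches the naive result
```

Per query-block, memory is proportional to the kv-block size rather than the full c² score matrix, which is the sequential-update intuition raised in the fourth bullet.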



CUDA MODE ā–· #off-topic (5 messages):

  • MLSys 2024 on the Horizon: Members are preparing for the MLSys 2024 conference in May, a key event at the intersection of Machine Learning and Systems. The conference emphasizes interdisciplinary focus and optimization of AI systems as integral to developing efficient and effective AI.

  • Poetic Conference Titles: A comment humorously observed that the phrase ā€œThe Conference for the Era of AIā€ from the MLSys conference description fits iambic pentameter, a common meter in poetry.

  • Smartphone Woes: A member made a brief quip about a phone not meeting expectations, mockingly calling it a ā€œNot so smart phone.ā€

  • Math Mysteries on Smartphones: A discussion emerged about the correct way to interpret mathematical operations on smartphones, with one member asserting that the order of multiplications/divisions should be from left to right.

  • Calculator Conundrums Solved: The correct treatment of mathematical expressions on calculators was clarified, noting that scientific calculators differentiate between expressions like ax and aƗx.
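Python applies the same left-to-right rule for operators of equal precedence, which makes the disputed reading easy to verify:

```python
# Multiplication and division share precedence and associate left to right,
# so 6 / 2 * 3 parses as (6 / 2) * 3, not 6 / (2 * 3).
left_to_right = 6 / 2 * 3
grouped_right = 6 / (2 * 3)
print(left_to_right)  # 9.0
print(grouped_right)  # 1.0
```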

Link mentioned: MLSys 2024: no description found


CUDA MODE ā–· #gtc-meetup (9 messagesšŸ”„):

  • Marksaroufim is GTC-bound on Monday: Marksaroufim announced plans to be at GTC on Monday morning and offered to share contact details for meetups.
  • Neurondeep’s Extended GTC Visit: Neurondeep plans to be present from March 14th to 25th and intends to attend every day of the event.
  • t_vi Eyes on GTC Meetup: t_vi is also attending and is looking forward to meeting up with others.
  • Marksaroufim Might Stay the Week: Initially planning for 1-2 days, Marksaroufim might end up staying the entire week at GTC, hoping for ā€œdecent wifiā€.
  • Mr.osophy Tackles GTC Meme Culture: Mr.osophy humorously suggests there should be memes about being unable to attend GTC and shared a past failed attempt to volunteer for a free pass.
  • Iron_bound Shares Sneaky Inspiration: In response to the discussion, iron_bound posted a YouTube video link as a humorous suggestion to sneak into events like GTC: ā€œI Snuck Into A Secret Arms-Dealer Conferenceā€.

Link mentioned: I Snuck Into A Secret Arms-Dealer Conference: Get an exclusive video every month at https://www.patreon.com/Boy_BoyWe made this in collaboration with the legendary Australian political satire group The C…


OpenRouter (Alex Atallah) ā–· #general (159 messagesšŸ”„šŸ”„):

  • Prompt Structure Clarification for Llama Models: A user asked if a specific JSON format containing ā€œsystemā€, ā€œuserā€, and ā€œassistantā€ keys could be used with llama models, while using the OpenAI JavaScript library. The inquiry was confirmed with a simple ā€œYes!ā€ by another participant.

  • Top Up to Pay: When asked about the payment method for using the chatbot models due to the absence of a required credit card connection, a user was advised to simply ā€œtopup your balance.ā€

  • Model Preference for Roleplay Consistency: Amidst discussions on the most consistent AI models for roleplay that avoid repetition and randomness, Sonnet was lauded as the best, with a confirmation that ā€œit’s not even close.ā€

  • Effective Prompt Formation for Chat Models: In a discussion about chat models disregarding system messages, it was suggested to place the system message first. A user shared insights that often a system’s context can be established within user or assistant messages, ultimately advising to follow the principle of ā€œjailbreakingā€ the model.

  • Analyzing Book Segments for Prompts via Automation: A user mentioned running a script that breaks down books and queries a model for corresponding prompts, using JSON formatted outputs. They noted that lzlv 70B yields better and more consistent results without errors compared to other models, which sometimes produce irrelevant data.
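For reference, the format in the first bullet is the standard OpenAI-style messages array; a small sketch of assembling a request body (the model id below is a placeholder for illustration, not necessarily one OpenRouter serves):

```python
import json

# Standard OpenAI-style chat array: optional "system" first, then
# alternating "user"/"assistant" turns.
messages = [
    {"role": "system", "content": "You are a concise assistant."},
    {"role": "user", "content": "Summarize BM25 in one sentence."},
]

# Shape of the request body an OpenAI-compatible client would send;
# the model id here is a placeholder for illustration only.
payload = {"model": "example-org/example-chat-model", "messages": messages}
body = json.dumps(payload)
print(body)
```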



LangChain AI ā–· #general (95 messagesšŸ”„šŸ”„):

  • Seeking Clarity on API Futures: A member inquired about whether to use astream_log or astream_events and questioned if one might be deprecated in favor of the other, recognizing that astream_events is currently in beta.
  • Upcoming Advanced Research Assistant & Search Engine: paul16307 announced an advanced research assistant and search engine project, inviting members to become beta testers and offering 2 months of free premium access. The service includes models like Claude 3 Opus, GPT-4 Turbo, and Mistral Large, hosted on Groq’s servers, and a sign-up waitlist is available at Rubik’s AI.
  • Documentation Dive: Community members discussed the complexity of LangChain documentation, with namenot223_69478 expressing that the docs are challenging for beginners. evolutionstepper advised reading the code and API reference after getting comfortable with the basics, while .bagatur requested feedback on specific confusing pages or suggestions for new ones.
  • Call for Beta Testers for AI-Powered Tutor: theoneandonly414 advertised an AI-powered tutor and AI whiteboard, asking members to join the waitlist at Bloon AI. While they are part of the operations team, the founder is handling development, and funding is currently being raised.
  • Query on Structured LLM Output: gryhkn requested help generating structured output from LLMs using LangChain, prompting widium to provide a comprehensive Python example utilizing Pydantic for city data output, a demonstration appreciated by gryhkn.
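The Pydantic example itself wasn’t captured here; as an illustration of the underlying idea in the last bullet, a stdlib-only sketch that validates a hypothetical JSON model response against a fixed schema:

```python
import json
from dataclasses import dataclass

@dataclass
class City:
    name: str
    country: str
    population: int

def parse_city(llm_output: str) -> City:
    """Validate a JSON model response against the City schema."""
    data = json.loads(llm_output)
    missing = {"name", "country", "population"} - data.keys()
    if missing:
        raise ValueError(f"missing fields: {sorted(missing)}")
    return City(name=str(data["name"]),
                country=str(data["country"]),
                population=int(data["population"]))

# Stand-in for an LLM reply constrained by a "respond only with JSON" prompt.
raw = '{"name": "Paris", "country": "France", "population": 2102650}'
city = parse_city(raw)
print(city)
```

Libraries like Pydantic add richer coercion and error reporting, but the shape is the same: prompt for JSON, then parse and validate it into a typed object.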



LangChain AI ā–· #langserve (45 messagesšŸ”„):

  • Trouble Streaming with RemoteRunnable: A member is facing issues with RemoteRunnable not streaming output when used in JavaScript. The code is expected to call /stream, but it only calls /invoke even though the same approach works correctly in Python.
  • Searching for a Solution: They’ve verified that the Python implementation of RemoteRunnable streams properly and are trying to write equivalent code in JavaScript using LangChain.js.
  • Reaching Out for Help: The member is advised on how to reach the LangChain team for support, including reporting an issue on GitHub and contacting via email.
  • No Known Fixes in Recent Changes: Finally, the member inquires if there were any changes in the last month that may have resolved streaming issues; however, there is no information on recent updates that could have addressed the problem.



LangChain AI ā–· #share-your-work (11 messagesšŸ”„):

  • AI Chatbot for Data Analysis: A new GitHub repository langchain-chatbot has been introduced, featuring an AI chatbot designed for analyzing and extracting information from data in conversational format.
  • Personal Bookmark Assistant is Born: The development of a Discord AI chatbot for managing Raindrop.io bookmarks has been announced, with the project available on GitHub as open source.
  • Seeking Input for Health & Productivity Digital Advisor: An invitation has been extended for individuals working in tech or professional services to join a 30-minute call to contribute insights for developing a digital advisor, with bookings available via Calendly.
  • Scrapegraph-ai: A Python scraper powered by AI and LangChain, capable of scraping content with API keys, has been launched. The tool already boasts over 2300 installations and can be found on GitHub.
  • Automating Sales and Research with AI: A challenge to build an AI capable of replacing the SDR/AE role led to the creation of a multi-agent automation framework utilizing Lyzr Automata and OpenAI hosted on AWS. The project can be explored on GitHub.


LangChain AI ā–· #tutorials (2 messages):

  • Nutriheal - AI-Driven Personalized Nutrition: A personalized nutrition AI app for patients, Nutriheal, was highlighted showing the use of Ollama, Open-webui, and Langchain’s Pebblo integration by Daxa AI. Watch the creation process in ā€œMaking an AI application in 15 minutesā€ on YouTube and explore more at navvy.co.

  • Plan-and-Execute with Langgraph: A YouTube video called ā€œPlan-and-Execute using Langgraphā€ shares insights on creating a plan-and-execute style agent, drawing inspiration from the Plan-and-Solve paper and the Baby-AGI project. The method involves generating a plan before executing it in the environment.

Links mentioned:

  • Plan-and-Execute using Langgraph: how to create a "plan-and-execute" style agent. This is heavily inspired by the Plan-and-Solve paper as well as the Baby-AGI project.The core idea is to firs...
  • Making an AI application in 15 minutes: Stack- Custom UI and RAG: A tweaked version of Open-webui- Local LLM Hosting: Ollama for locally hosted LLMs.- Data Privacy: Integrates Pebblo by DaxaAI to e...
  • Home: I’m deeply passionate about AI. Let’s connect to unlock AI’s potential and collaborate on innovative projects!

Interconnects (Nathan Lambert) ā–· #other-papers (8 messagesšŸ”„):

  • Deciphering API-Protected LLMs: A new study outlined in an arXiv paper demonstrates that it is possible to extract substantial non-public information about commercially available LLMs through API calls. The research relies on the softmax bottleneck characteristic of modern LLMs, which suggests model outputs are restricted to a linear subspace, allowing for various analytical advantages, including estimating an LLM’s hidden size.

  • Estimation of GPT-3.5’s Size Causes Stir: Discussion in the channel revolves around recent findings that estimate OpenAI’s gpt-3.5-turbo to be around 7B in size. A user expresses surprise at this number, while others are skeptical about the accuracy of this estimation.

  • Skepticism Over Model Size Accuracy: Nathan Lambert, a participant in the conversation, suggests that the 7B size estimation for gpt-3.5-turbo might be incorrect. There is an implication that the estimation doesn’t hold up if gpt-3.5 is inferred to be a Mixture of Experts (MoE) model.

  • Potential Disruption by Distillation Techniques: The skepticism continues with a hypothesis that unless there is some ā€œmega distillation sauce,ā€ the size estimate for gpt-3.5-turbo is likely erroneous. This suggests reliance on advanced techniques to justify the model’s capabilities given the estimated size.

  • Performance Boost through Model Mixture Strategy: A user pointed to a strategy from a paper published last year: performance on math problems can be significantly enhanced by generating the first tokens with a stronger model such as GPT-4, then continuing generation with a cheaper model like LLaMA. This might indicate a similar approach is in use with the turbo models.
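The paper's key observation — that because logits are a linear map of a d-dimensional hidden state, any collection of full logit vectors has rank at most d — can be illustrated with a toy numerical sketch. All numbers here are made up for illustration (vocab 12, hidden size 3), not taken from any real model:

```python
import random

random.seed(0)
VOCAB, HIDDEN, SAMPLES = 12, 3, 8

# Toy "unembedding" matrix W (vocab x hidden): logits = W @ h.
W = [[random.gauss(0, 1) for _ in range(HIDDEN)] for _ in range(VOCAB)]

def logits(h):
    return [sum(W[v][j] * h[j] for j in range(HIDDEN)) for v in range(VOCAB)]

# Collect logit vectors from random hidden states (stand-ins for API outputs).
rows = [logits([random.gauss(0, 1) for _ in range(HIDDEN)])
        for _ in range(SAMPLES)]

def matrix_rank(rows, tol=1e-9):
    # Gaussian elimination with partial pivoting.
    m = [row[:] for row in rows]
    rank, col = 0, 0
    n_rows, n_cols = len(m), len(m[0])
    while rank < n_rows and col < n_cols:
        pivot = max(range(rank, n_rows), key=lambda r: abs(m[r][col]))
        if abs(m[pivot][col]) < tol:
            col += 1
            continue
        m[rank], m[pivot] = m[pivot], m[rank]
        for r in range(rank + 1, n_rows):
            f = m[r][col] / m[rank][col]
            for c in range(col, n_cols):
                m[r][c] -= f * m[rank][c]
        rank += 1
        col += 1
    return rank

# Despite the logit vectors being 12-dimensional, their rank
# reveals the hidden size of the toy model.
print(matrix_rank(rows))  # prints 3
```

The paper's actual method is more involved (real APIs expose only top-k log-probs, and numerics are noisier), but this is the linear-algebra core behind the hidden-size estimate.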

Link mentioned: Logits of API-Protected LLMs Leak Proprietary Information: The commercialization of large language models (LLMs) has led to the common practice of high-level API-only access to proprietary models. In this work, we show that even with a conservative assumption…


Interconnects (Nathan Lambert) ā–· #ml-drama (19 messagesšŸ”„):

  • Twitter Drama Anticipation: A member linked to a Twitter post by Sebastian Raschka suggesting it could stir up drama within the machine learning community.
  • Open Source Definition Debate: There was a discussion regarding the Open Source Software (OSS) community potentially crystallizing a stance on what qualifies as open source, especially relating to whether data should be included.
  • Shared Understanding for Open Source: Another conversation took place about the need for a shared understanding within open source licensing, highlighting the differences between Apache 2.0 and GPLv3 licenses.
  • Drama Unfolds on Twitter: The member referenced a Tweet by @BlancheMinerva, indicating an ongoing disagreement which seems to add to the drama.
  • Frustration and Disengagement from Twitter: A member expressed frustration with EleutherAI’s approach to online discussions, mentioning that they would limit their Twitter activity and focus more on blogging after a stressful interaction.

Link mentioned: Tweet from Stella Biderman (@BlancheMinerva): @natolambert @felix_red_panda You’re wrong though :P


Interconnects (Nathan Lambert) ā–· #random (63 messagesšŸ”„šŸ”„):

  • Grok-1 Model Weights Released by xAI: The 314B parameter Mixture-of-Experts model Grok-1’s weights, trained by xAI, have been released as open source under the Apache 2.0 license. This release only includes the base model, not fine-tuned, and can be found on GitHub.

  • Grok-1 Size Surprises Community: The community expressed surprise at the 314B size of the Grok-1 model; discussions revolved around its potential training dataset size, given the model’s large scale and its possible connection to Chinchilla’s principles.

  • Grok Compared to Other Models: Grok-1’s metrics were juxtaposed against another model, Falcon, with users noting that Grok indeed has better performance on benchmarks like GSM8K and MMLU.

  • Release Mechanism Sparks Concern: The distribution of Grok-1 via a magnet link sparked discussion about the right way to release models, touching upon policy considerations and the credibility of ā€œopenā€ teams. This highlighted a general concern about how powerful models should be made available to the public.

  • Novel Distribution Ideas: A humorous speculation on the feasibility of mailing physical drives with model weights came up as a comparison against the cost of cloud services, leading to a wider conversation about the practicalities of model distribution.
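For a sense of the numbers behind the Chinchilla speculation: the common rule of thumb for dense models is roughly 20 training tokens per parameter. As the linked tweet notes, it is unclear this transfers to MoE models, where only a fraction of the 314B parameters are active per token, so the figures below are back-of-the-envelope only and the active-fraction value is hypothetical:

```python
params_total = 314e9       # Grok-1 total parameter count
tokens_per_param = 20      # Chinchilla rule of thumb (dense models)

chinchilla_tokens = params_total * tokens_per_param
print(f"{chinchilla_tokens / 1e12:.1f}T tokens")  # prints 6.3T tokens

# If Chinchilla scaling tracked *active* parameters instead, a
# hypothetical 25% active fraction would shrink the estimate:
active = 0.25 * params_total
print(f"{active * tokens_per_param / 1e12:.1f}T tokens")  # prints 1.6T tokens
```

Either way, the implied dataset is in the trillions of tokens, which is what drove the channel's surprise at the model's scale.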

Links mentioned:

  • Open Release of Grok-1: no description found
  • Have We Reached Peak AI?: Last week, the Wall Street Journal published a 10-minute-long interview with OpenAI CTO Mira Murati, with journalist Joanna Stern asking a series of thoughtful yet straightforward questions that Murat...
  • Tweet from Grok (@grok): @elonmusk @xai ā–‘Wā–‘Eā–‘Iā–‘Gā–‘Hā–‘Tā–‘Sā–‘Iā–‘Nā–‘Bā–‘Iā–‘Oā–‘
  • Tweet from Xeophon (@TheXeophon): Chinchilla doesn’t apply to MoE directly, does it? If it does, we can infer the training data set size for Grok. It’s unexpectedly large, so I guess they went for optimality first, given the little ti...

Alignment Lab AI ā–· #general-chat (6 messages):

  • Exploring Aribus Applications: A member shared a Twitter link to Aribus, prompting discussion about what can be built with it. Follow-up messages indicated some confusion about the post’s content.
  • Search for HTTP Response-Trained Embeddings Model: A member inquired about an embeddings model specifically trained on HTTP responses, expressing uncertainty on where to start looking.
  • Transformers as Embeddings Models: The same member pondered whether any transformer model trained accordingly could serve as an embeddings model.
  • Request for Mistral Fine-Tuning on Specific Datasets: A member asked if anyone has a version of the Mistral model fine-tuned on both the orca-math-word-problems-200k dataset and nvidia/OpenMathInstruct-1 dataset.
  • A Friendly Hello: A user briefly greeted the chat with ā€œhiā€.

Alignment Lab AI ā–· #oo (32 messagesšŸ”„):

  • Taming Grok 1 Requires Heavy Resources: A member described Grok 1 as a ā€œhuge beastā€ that only labs on the scale of OpenAI and xAI have tried to manage, indicating a significant need for compute, data, and time dedicated to experiments.
  • Efficient MoE Training Infrastructure in Place: Existing infrastructure for training MoE (Mixture of Experts) models at almost 100% efficiency was mentioned, but there is a pressing need for computational resources.
  • Mixed Views on Training Grok 1: While some members are willing to contribute to fine-tuning Grok 1, others questioned its performance relative to costs and suggested alternative models might offer better efficiency for the investment.
  • Concerns Over Grok 1 Performance and Data Quality: Discussions highlighted the challenges Grok 1 faces, particularly on the MMLU benchmark and as an underperforming MoE model. Questions arose about the optimal data mix, the timeline for achieving results, and who would benefit from advancing this model.
  • Comparison with High-Performance Models: Grok 1 was compared to industry-leading models like GPT-4 and Claude, stating that while it may not surpass GPT-4, it has the potential to be the best open-source large language model, possibly outperforming Claude 2.

Link mentioned: keirp/hungarian_national_hs_finals_exam Ā· Datasets at Hugging Face: no description found


LLM Perf Enthusiasts AI ā–· #general (1 messages):

  • The Lazy Developer’s Creed: A member expressed that Devin has inspired a minimalist approach to app development, favoring tools that avoid manual tasks like pasting into terminals. They argue that anything more complex than a local app with filesystem control is unnecessarily complicated, suggesting dissatisfaction with current open-source options.

LLM Perf Enthusiasts AI ā–· #claude (7 messages):

  • Tweet Sparks Controversy: A recent tweet by @tszzl suggests that Anthropic might be playing the role of ā€œcontrolled oppositionā€ to instill fear among technical staff workers. The motive and accuracy of this claim were discussed, with one member affirming the tweet’s veracity.

  • Content Moderation Woes: A member reported difficulties with content moderation on images containing people, mentioning that sometimes the system just refuses to operate correctly.

  • Curiosity about Claude: A member asked about Claude Sonnet’s effectiveness at large-scale usage, particularly for a project that could involve several dozen million tokens per month.

Link mentioned: Tweet from roon (@tszzl): anthropic is controlled opposition to put the fear of god in the members of technical staff


LLM Perf Enthusiasts AI ā–· #reliability (16 messagesšŸ”„):

  • Introducing the Knowledge Processing Unit (KPU): Maisa announces the KPU, a new framework claimed to enhance LLMs by separating reasoning and data processing, aiming to solve complex tasks. The KPU is reported to outperform well-known models like GPT-4 and Claude 3 Opus according to Maisa’s blog post.

  • Benchmarks Stir Skepticism: Members expressed skepticism regarding Maisa’s benchmark comparisons. They questioned why KPU + GPT-4-turbo was compared to GPT-4 without turbo, instead of comparing like versions.

  • Dissecting the KPU Architecture: The technology behind KPU is believed to include self-evaluation techniques and innovative context window management. However, there’s still confusion among members about the specificity of the tech.

  • Rate of Return Questionable: Concerns were raised about the absence of latency information for the KPU, highlighting the trade-off between improved performance on tasks like MATH and the potential increase in latency affecting practical application in products.

  • CEO Clarifies KPU Functionality: Maisa’s CEO took to Twitter to explain that KPU is an AI architecture enhancing LLMs, acting like a GPU for knowledge management without chunking or embeddings. Further, the CEO offers an API key for researchers wishing to conduct their own evaluations and a notebook, detailed in a tweet by @davipar.

Links mentioned:

  • Tweet from David Villalón (@davipar): happy to answer! it is not a new model, indeed KPU is agnostic to intelligence providers (OpenAI, Antrophic...). It is a new AI architecture to work with LLMs that leverages their reasoning capabiliti...
  • KPU - Maisa: AI-Powered Knowledge Processing Platform. A simple API for executing business tasks. Abstracting the complexities of using the latest AI architectures for software and app developers

LLM Perf Enthusiasts AI ā–· #openai (1 messages):

res6969: https://x.com/leopoldasch/status/1768868127138549841?s=46


DiscoResearch ā–· #general (21 messagesšŸ”„):

  • DiscoLM Struggles with German Responses: DiscoResearch/DiscoLM-mixtral-8x7b-v2 model fails to generate a response in German even after instruction fine-tuning on a specific dataset. The issue also extends to a ValueError encountered during sequence classification fine-tuning.

  • Technical Difficulties with Local Deployment: A user faced problems when running DiscoLM-70b through vllm and experienced hanging issues on a machine with ample resources. Sample server set-up and response codes provided may indicate potential compatibility issues.

  • Insights into German Language Models: A Reddit post provides feedback on various German language models, with observations on the performances of SauerkrautLM Una SOLAR Instruct, Starling LM 7B Alpha, among others. Images from the Reddit thread illustrate users’ experiences with the mentioned models (Reddit Feedback).

  • Grok Model Discussion: A member shared a GitHub link to the Grok model, discussing its size of 314B parameters and pondering who might be able to run it, with another member noticing it needs only 300GB.

  • Exploration of Benchmarks for German LLMs: Discussions included potential improvements for German language models, a potential German language benchmark paper called supergleber-german-language-evaluation-benchmark, and suggestions to engage with university research resources. Links to related benchmarks were shared, suggesting an opportunity for collaboration (Informatik Uni Würzburg, OpenReview).


DiscoResearch ā–· #discolm_german (4 messages):

  • Disco Bot Sticks to the Script: A member noted that the model is designed to adhere to the system prompt, but variations may be tested for optimal results. The demo does not use special settings and operates on fastchat/vllm.

  • Server Migration Mixed Success: There’s been a server transfer for the demo, moving from a home kitchen setup to a more robust location. However, the transition has faced networking issues, with hopes to resolve them early next week.

  • Kitchen vs Professional Hosting – A Wry Observation: A user humorously commented that a hobbyist server in a kitchen may run without a hitch, whereas a professional hosted server is more prone to problems like networking issues and hardware failures.


Datasette - LLM (@SimonW) ā–· #ai (20 messagesšŸ”„):

  • Prompt Engineering Tools by Explosion: A member mentioned having developed tools to assist with prompt engineering, integrated into the Prodigy product by Explosion, suggesting the setup can be insightful for others interested in prompt engineering, available at Prodigy Features.

  • Prompt Engineering as Data Annotation: The same member described prompt engineering as a data annotation problem, recognizing its limitations but also acknowledging the utility of such an approach in certain contexts.

  • Open-source Prompt Testing Tool: Another member discussed trying out the open-source tool PromptTools found on GitHub, which allows for prompt testing and experimentation but lacks version management.

  • Vercel’s Prompt Comparison Interface: The Vercel AI SDK was mentioned for its user-friendly interface to compare model responses for a single prompt, with a follow-up observation noting its similarity to the PromptTools playground.

  • Helicone AI for Dreamy Prompt Management: Helicone AI was introduced as the closest available solution to one member’s dream of a comprehensive prompt management system, though at the time of discussion it was only beginning to develop prompt management and analysis features.

  • Innovative Experiments in Content Translation: A member shared an experiment in content translation using different personas, with a proof of concept built on GPT-3.5-turbo. They provided a link to an example blog post with a translation experiment at the bottom: How to Build a Buzzword.
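The persona-translation experiment boils down to conditioning a translation request on a character via the system message. A minimal sketch of that prompt construction (the persona, target language, and wording here are illustrative, not the member's actual prompts):

```python
def persona_translation_prompt(text: str, target_lang: str, persona: str):
    """Build a chat-completion message list asking a model to translate
    `text` into `target_lang` in the voice of `persona`."""
    return [
        {"role": "system",
         "content": f"You are {persona}. Translate the user's text into "
                    f"{target_lang}, preserving your characteristic voice."},
        {"role": "user", "content": text},
    ]

messages = persona_translation_prompt(
    "How to Build a Buzzword", "German", "a cheerful startup founder")
print(messages[0]["content"])
```

The resulting list is what gets passed as `messages` to a chat-completions client; swapping the persona string is the whole experiment.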


Datasette - LLM (@SimonW) ā–· #llm (1 messages):

obra: Is it possible to recover the seed used by the openai models for a previous api request?
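To the best of our knowledge, no: the chat completions API accepts an optional `seed` parameter and returns a `system_fingerprint` describing the backend configuration, but it does not report the seed a past request used if you never set one. Reproducibility requires setting and logging the seed up front, as this sketch of a request payload illustrates (model name is illustrative; no network call is made here):

```python
import json

# If a past request was made WITHOUT an explicit seed, the seed it used
# cannot be recovered afterwards. Set one yourself and record it
# alongside the request if you want to replay it later.
request = {
    "model": "gpt-4-turbo-preview",   # illustrative model name
    "messages": [{"role": "user", "content": "Say hi"}],
    "seed": 42,                        # set and log this for replay
}
print(json.dumps(request))
```

Even with a fixed seed, determinism is only best-effort: if the `system_fingerprint` in the response changes between calls, the backend changed and outputs may differ.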


Skunkworks AI ā–· #general (17 messagesšŸ”„):

  • Promising Method Awaiting Release: baptistelqt mentions preparing a paper/article about a method that improves global accuracy and sample efficiency in model training. They are working on better charts and results structure before release.

  • Challenges Scaling to Large Models: The member baptistelqt indicates a need to test their method for effectiveness on larger models, citing a lack of resources to empirically prove it at scale.

  • Seeking Compute Resources for Scaling: far_el. expresses interest in baptistelqt’s work and offers to discuss potential allocation of compute and resources to scale the unproven method.

  • Calls for Collaboration on Quiet-STaR: satyum inquires about joining the ā€œQuiet-STaR: Language Models Can Teach Themselves to Think Before Speakingā€ project, with far_el. responding by asking about proficiency in PyTorch and transformer architectures.

  • Evidence of Method’s Impact on VGG16: baptistelqt shares preliminary results comparing their method against base training on VGG16 with a subset of CIFAR100, showing improved test accuracy from 0.04 to 0.1 after just 1 epoch.


Skunkworks AI ā–· #off-topic (1 messages):

pradeep1148: https://www.youtube.com/watch?v=ZlJbaYQ2hm4