It's often said that a telltale sign a startup has gone bad is "shipping and dipping": top performers who have decided to leave try to ship something first so they can go out on a high note. This has just happened to both:
- Inflection AI, which shipped Inflection 2.5 two weeks ago and is now seeing its two executive cofounders leave for Microsoft, and
- Stability AI, which shipped Stable Diffusion 3 two weeks ago and now has Rombach et al. departing.
Senior departures are a fact of life in chaotic startups, but these feel more significant than most. It could be the start of a consolidation/cooling wave among the "hot"/"GPU-rich" startups, but we're not quiiiite ready to call it yet. Consider us on alert though.
Table of Contents
[TOC]
PART X: AI Twitter Recap
all recaps done by Claude 3 Opus, best of 4 runs
Microsoft AI
- @mustafasuleyman: Mustafa Suleyman is joining Microsoft as CEO of Microsoft AI, leading consumer AI products and research like Copilot, Bing and Edge (3.5M views)
- @yusuf_i_mehdi: Microsoft Azure and NVIDIA's Grace Blackwell 200 Superchip are coming together (8k views)
- @saranormous: Satya Nadella got a card from Sara Norman about Mustafa Suleyman joining Microsoft (21k views)
- @kylebrussell: Mustafa Suleyman joining Microsoft seems like a weird acquisition, perhaps for regulatory or cost reasons (13k views)
Inflection AI
- @inflectionAI: Inflection AI is announcing important updates as a new phase for the company begins (505k views)
- @vikhyatk: Inflection AI raised $1.3B at a $4B valuation less than a year ago (78k views)
- @kylebrussell: Inflection AI investors and early employees got rug pulled (38k views)
- @inflectionAI: Inflection AI 2.5 will soon be available by API to accelerate their mission of creating a personal AI for everyone (36k views)
Google DeepMind
- @GoogleDeepMind: Google DeepMind is announcing TacticAI, an AI assistant capable of offering insights to football experts on corner kicks, developed with Liverpool FC (373k views)
- @GoogleDeepMind: Eight years ago, AlphaGo made history by becoming the first AI to defeat world champion Lee Sedol in the game of Go (109k views)
- @GoogleDeepMind: TacticAI uses a geometric deep learning approach to tackle the problem of limited corner kick data in Premier League matches (31k views)
- @GoogleDeepMind: In a blind test, football experts found TacticAI's suggestions indistinguishable from real corners and favored its tactical advice 90% of the time (21k views)
Anthropic
- @AnthropicAI: Anthropic's Claude 3 Haiku and Claude 3 Sonnet are now generally available on Google Cloud's Vertex AI platform, with Claude 3 Opus coming soon (102k views)
- @abacaj: Anthropic cooked with the Claude 3 model (31k views)
- @abacaj: Claude gives working code based on new documentation not in training sets, unlike GPT-4 (8k views)
- @abacaj: Anthropic redeemed themselves with the Claude 3 model after their model sucked last year (4k views)
AI Safety and Risks
- @AISafetyMemes: NVIDIA announces AI nurses that are 90% cheaper than humans and outperform them at bedside manner, raising concerns about job displacement and AI progress (77k views)
- @ClementDelangue: Concentration of power is the biggest risk in AI (62k views)
- @AISafetyMemes: Zvi Mowshowitz warns about the risks of AI systems like Devin that can make plans, handle obstacles, and potentially lead to recursive self-improvement (25k views)
- @abacaj: AGI in the hands of a single organization is increasingly likely and keeps Aidan Gomez up at night (25k views)
AI Benchmarks and Evaluations
- @yanndubs: AlpacaEval is now length-controlled, with 0.98 correlation to Chat Arena, simple interpretation, and robustness to length gamification (30k views)
- @deliprao: Grok's performance doesn't seem better than Mixtral, which is an order of magnitude smaller (9k views)
- @yanndubs: AlpacaEval uses length control by default but still shows non-LC metrics (547 views)
- @deliprao: LLM development today is not as efficient as it could be due to underinvestment in novel benchmarking methods (894 views)
AI Assistants and Agents
- @aryxnsharma: Induced AI is opening early access to the first public and free autonomous web agent API (15k views)
- @aryxnsharma: A browser agent is a gateway to building a full digital remote worker (2k views)
- @yoheinakajima: Yohei Nakajimaās agent logs internal function calls with time, type, name, arguments and results (1.4k views)
- @aryxnsharma: An early Induced AI user built a macOS menu bar using their agent API (497 views)
AI Coding Assistants
- @svpino: In the future, coding may become a hobby as building software will be more about problem-solving and thinking than writing code (93k views)
- @LangChainAI: LangChain and Modal are hosting a webinar on building code agents in production without pain (8.5k views)
- @corbtt: Next-gen coding assistants will operate at the level of abstraction of Amazon's "six-pager" detailed design docs (537 views)
- @rameerez: Ramiro Berrelleza made a GPT for Hotwire docs by downloading them in Markdown and feeding into GPT (234 views)
AI Avatars and Video
- @suno_ai_: Suno AI is releasing V3 of their audio generation model (197k views)
- @AravSrinivas: Suno AI is the next AI unicorn, with amazing iteration speed, sound quality, and increasingly being used over Spotify (327k views)
- @synthesiaIO: Synthesia was named one of Fast Company's Most Innovative Companies for their AI video and avatar technology (1k views)
- @AssemblyAI: New AI-powered content creation tools are coming out frequently, including for video, podcasting, avatars and more (812 views)
Memes and Humor
- @nearcyan: Underused strategy in life meme (417k views)
- @AISafetyMemes: "It's magic" meme about AI (19k views)
- @DeepLearningAI: Meme about AI-generated videos not looking cool, originally from /ProgrammerHumor on Reddit (7k views)
- @AISafetyMemes: Meme about an AI-generated death metal song based on Claudeās messages (7k views)
PART 0: Summary of Summaries of Summaries
We are concluding that Claude Opus is simply the best model for top-level summaries, so we're discontinuing the A/B/C tests (see archives for our struggles/record). We'll be exposing parallel runs for all 3 + more models (incl. Gemini 1.5!!), as this problem is topologically similar to the personalization app we'll be launching.
- Grok-1 Release Sparks Debate: The open-source release of Grok-1, a 314B parameter Mixture-of-Experts model from xAI under the Apache 2.0 license, has generated significant discussion. While some are excited about its potential, others question its practicality given the immense GPU requirements for inference. Comparisons are drawn to other large models like GPT-4 and Claude, with speculation about Grok-1's training data and architecture choices.
- Nvidia's Next-Gen GPUs Hint at AI Advances: Rumors about Nvidia's upcoming GeForce RTX 50-series "Blackwell" graphics cards implementing GDDR7 memory at 28 Gbps have piqued interest, as detailed in a TechPowerUp article. Discussions also touched on CEO Jensen Huang's GTC keynote potentially confirming GPT-4's 1.8 trillion parameter Mixture-of-Experts architecture.
- Attention Mechanisms Under the Microscope: Deep dives into attention mechanisms occurred, particularly around memory scaling properties in FlashAttention and RingAttention. A paper on Striped Attention was shared, proposing a method to improve workload balance in causal transformer models. Fundamental motivations behind attention, like overcoming fixed-length encoding limitations, were also clarified.
- Prompt Engineering Advancements: New tools for prompt engineering were introduced, such as Prodigy by Explosion and the open-source PromptTools. Discussions emphasized treating prompt engineering as a data annotation problem and the importance of version management and model response comparison features. Innovative experiments in content translation using different personas with GPT-3.5-turbo were also shared.
- Photonics Chips Promise Faster AI: A YouTube video highlighted breakthroughs in photonics chips claimed to be 1000 times faster, with insights from Lightmatter, a photonic supercomputer company. The potential impact on AI acceleration and high-performance computing was discussed.
PART 1: High level Discord summaries
Stability.ai (Stable Diffusion) Discord
- Rolling Out the 3D Future with Stability AI: Stability.ai has announced Stable Video 3D (SV3D), elevating the generative model landscape with the ability to produce high-quality multi-view images and 3D meshes from single object images. Two variants, SV3D_u and SV3D_p, are released, with SV3D_u specialized in generating orbital videos and the latter extending these capabilities, as detailed in the official announcement.
- Anticipation Builds for SD3: The community awaits the distribution of SD3 invites amid speculation of an April release; discussions involve updates and partnerships with blockchain companies.
- Techies Tackle AI Model Quirks: Participants shared experiences and technical challenges of running different Stable Diffusion models, citing inefficiencies and issues requiring AVX instruction sets, and engaged in troubleshooting and optimizations for various hardware setups.
- Serialization Safety and Security: Engineers sought advice on converting .pt files to SAFETENSORS format, referencing official Python documentation about the security concerns associated with object serialization using Pickle, as seen here (a minimal conversion sketch follows this list).
- Debating Openness in AI Licensing: A discussion unfolded around the commercial use and open-source licensing of AI models, with some community members expressing a laissez-faire attitude towards ownership once a project is open-sourced.
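Since the serialization thread above is a recurring question, here is a minimal conversion sketch in Python. It assumes the .pt file holds a flat dict of tensors (real Stable Diffusion embeddings may nest theirs differently) and that your PyTorch is recent enough for weights_only; treat it as a starting point, not the community's exact recipe.

```python
# Minimal .pt -> .safetensors conversion sketch; file names and the flat-dict layout are assumptions.
import torch
from safetensors.torch import save_file

obj = torch.load("embedding.pt", map_location="cpu", weights_only=True)  # avoids arbitrary pickle code paths
tensors = {k: v.contiguous() for k, v in obj.items() if isinstance(v, torch.Tensor)}
save_file(tensors, "embedding.safetensors")
```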
Perplexity AI Discord
- Perplexity Unchains Pro Users: Perplexity Pro users can now enjoy unlimited daily queries with Claude 3 Opus, raising debates about whether marketing terms like "unlimited" may be misleading.
- Is "Unlimited" a Myth?: Engineers scrutinize Perplexity's "unlimited" claims for potentially misleading customers. Opinion is split on the actual usage limits, and discussions touch on the legal ramifications and on company growth comparisons with Gemini Pro 1.5.
- AI as a Household Teacher: Some are exploring using AI models such as Claude 3 Opus to explain things simply for educational purposes with children, while others shared their experiences with AI tools like ChatGPT Pro for promoting learning.
- Technical Inconsistencies and Future Tech Queries: Inconsistencies in Perplexity API behavior, such as varying responses and expectations around model deprecation, sparked discussion. Queries were also made about integrating open-source models like Grok and about image vision support in the API.
- Perplexity's Varied Use Cases and Visibility: Perplexity AI is being used for diverse purposes, including creative writing, vulnerability management, and technology investigations. A gentle nudge was given about sharing content correctly per the Discord guide to maintain community visibility.
Unsloth AI (Daniel Han) Discord
- GitHub Stardom for Unsloth AI: Unsloth AI revealed it's trending on GitHub this week, boasting notable performance features with promises of 2-5X faster speeds and 70% less memory usage for QLoRA & LoRA finetuning (a hedged finetuning sketch follows this list). The community is nudged to star the project at Unsloth GitHub.
- Grok-1 Critique by the Community: The release of Grok-1 by xAI, with 314 billion parameters, raised eyebrows over its practicality given the steep GPU requirements for inference. Curiosity flared over whether the motivation behind such a hefty model is technical advancement or a marketing strategy influenced by Elon Musk's visionary approach.
- Fine-Tuning Epochs and Model Size Discussions: Discord discussions turned to fine-tuning epochs, especially in the context of voice neural networks vs. LLMs, with a consensus hinting that 3 epochs could be the standard to prevent style overfitting. For matching trainable parameters to the data, a rank of 32 or 64 was recommended for datasets with about 800,000 lines.
- Mistral Versus Gemma for Specific AI Tasks: A detailed comparison was drawn between Mistral-7b and Gemma 7b, pointing out their proficiency in domain-specific classification tasks. Technical issues related to using the Huggingface Inference API with 4-bit quantization were also addressed, suggesting the use of Unsloth's Colab notebook for successful executions.
- Exploration of Unsloth AI Capabilities and Support: The community was active in discussing the ins and outs of model fine-tuning, conversion, and deployment using Unsloth, with a focus on open-source models like Mistral. It affirmed that while full fine-tuning isn't currently Unsloth's emphasis, it remains a possibility in future development. Moreover, it was made clear that the eval_steps parameter impacts the frequency of evaluations during fine-tuning, but not necessarily the outcome.
- Poetic and Philosophical Musings in Community: In a lighter mood, the Unsloth AI guild shared a poem titled "An Appeal to A Monkey", exploring themes of simplicity in our complex world and maintaining a conversational dynamic inclusive of intuitive learning examples from youngsters and serendipitous username encounters.
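For readers who want to try the 2-5X claim themselves, here is a hedged sketch of an Unsloth-style QLoRA run. Exact argument names differ across unsloth/trl versions, and the base model, dataset, and column mapping below are illustrative assumptions rather than the guild's exact setup.

```python
# Hedged QLoRA finetuning sketch with Unsloth; verify argument names against the current docs.
from datasets import load_dataset
from transformers import TrainingArguments
from trl import SFTTrainer
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/mistral-7b-bnb-4bit",  # assumed 4-bit base; any supported model works
    max_seq_length=2048,
    load_in_4bit=True,
)
# Rank 32 matches the ballpark suggested in the thread for ~800k-line datasets.
model = FastLanguageModel.get_peft_model(
    model, r=32, lora_alpha=32, target_modules=["q_proj", "k_proj", "v_proj", "o_proj"]
)

dataset = load_dataset("teknium/GPT4-LLM-Cleaned", split="train")
dataset = dataset.map(lambda ex: {"text": ex["instruction"] + "\n" + ex["output"]})  # column names assumed

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",
    max_seq_length=2048,
    args=TrainingArguments(output_dir="outputs", num_train_epochs=3, per_device_train_batch_size=2),
)
trainer.train()
```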
LM Studio Discord
- New Kid on the Block - Command-R 35B: Support for the Command-R 35B model in LM Studio is expected soon, pending the merge of llama.cpp pull request #6033. However, there's an unresolved compatibility issue, as llama.cpp does not support c4ai yet, detailed here.
- Grok-1's Open-Source Debate: The release of Grok-1, a 314B parameter model, sparked discussion on its practicality for local use, with skepticism over its functionality without further tuning. More information can be found on the xAI blog.
- GPU Tug-of-War - Budget vs. Performance: Conversations on hardware for running language models ranged from considering second-hand GPUs like the 3090 to using MacBooks. Users interested in purchasing Epyc CPUs and older Teslas like the K80 discussed the technicalities and challenges involved.
- AMD's ROCm Hurdles: A link to prebuilt Windows ROCm libraries for gfx1031 and gfx1032 was shared (GitHub - brknsoul/ROCmLibs). However, the AMD 6700 XT lacks official ROCm support, presenting compatibility issues with LM Studio.
- LM Studio Configurations at Your Fingertips: Users seeking model configuration presets for LM Studio can access a collection of JSON files at GitHub - lmstudio-ai/configs. Discussions continued about functionality across various LM Studio versions, while a call for a JSON function-calling implementation on the Local Inference Server went unaddressed (a sketch of calling the server's OpenAI-compatible endpoint follows this list).
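As a reference for the Local Inference Server discussion, the sketch below hits LM Studio's OpenAI-compatible endpoint; the port (1234) and placeholder model name are assumptions to adjust to your local setup.

```python
# Hedged sketch: query a locally running LM Studio server through the OpenAI client.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")  # the key is ignored locally
resp = client.chat.completions.create(
    model="local-model",  # whichever model is currently loaded in LM Studio
    messages=[{"role": "user", "content": "Summarize the trade-offs of running a 35B model locally."}],
    temperature=0.2,
)
print(resp.choices[0].message.content)
```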
Nous Research AI Discord
- RTX 50 "Blackwell" Opts for Slower GDDR7: NVIDIA's anticipated GeForce RTX 50-series "Blackwell" graphics cards will use GDDR7 memory at 28 Gbps, not maxing out the 32 Gbps capability. Details reflect a conservative approach akin to the RTX 20-series' memory speeds and can be found in a TechPowerUp article.
- Artificial Charm to Boost AI Performance: Discussions included unconventional methods like using a network of "horny claudes" to improve mermaid diagrams, and the concept of "rizzing up" models for enhanced responses. The ethics of replicating features from other AI models also surfaced as a hot topic.
- Elephant in the Room - Model Scaling Costs: Conversations highlighted the Herculean costs ($300,000) and practicality concerns of scaling AI models like llama-2 13b to 20b, with members showing skepticism about the incremental benefits of upscaling, despite wishful intrigue for a 20-billion-parameter model that outperforms Mistral.
- Grok-1's Position Debated Among AI Titans: Grok-1, with its 314B parameters, stirred discussion for being barely more potent than GPT-3.5 and less competitive than smaller 7B-parameter models. NVIDIA CEO Jensen Huang may have inadvertently confirmed GPT-4's 1.8 trillion parameters. Concurrently, licensing changes allowing commercial use of Yi-9B and the merits of continual pretraining of models like Mixtral fostered a debate on practical strategies for AI progression.
- Open Source vs. Commercial AI Models: Reflections on the potential stifling of open-source AI by commercially restrictive giants, suggesting a future where entities may customize large language models to embody their core values. This discourse envisages open-source models furnishing a robust base for ongoing development and expansion of AI systems.
- Modeling Perplexity and Pruned Prospects: Technical exchanges included the perplexities of calculating perplexity for Llama2 (a minimal sketch follows this list) and creating pruned versions of large models, such as Smallstral, a downscaled take on Mistral-7B-v0.1. Partnership efforts like Bittensor faced hurdles with chain issues and the pursuit of transactional tokens (tao) for network activities.
- RAG's Desirable Futures and Modes: A vision for RAG (Retrieval-Augmented Generation) models suggests markdown-like outputs and advanced reasoning capabilities. Members discussed balance in model responses, toggling between relying on internal knowledge and utilizing an exclusive external context or "RAG mode." Incorporating external context, perhaps through Cohere's span highlighting, was mused as a means to enhance RAG utility.
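For the perplexity-calculation thread, a minimal sketch is below; it assumes access to the gated Llama-2 weights (any causal LM works) and skips the strided sliding-window evaluation a real benchmark would use.

```python
# Minimal perplexity sketch for a causal LM; model choice and precision are assumptions.
import math
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

name = "meta-llama/Llama-2-7b-hf"  # assumed; swap in any causal LM you can load
tok = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(name, torch_dtype=torch.float16, device_map="auto")

text = "Large language models estimate the probability of the next token."
ids = tok(text, return_tensors="pt").input_ids.to(model.device)
with torch.no_grad():
    loss = model(ids, labels=ids).loss  # mean token-level cross-entropy
print("perplexity:", math.exp(loss.item()))
```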
Eleuther Discord
- Ivy League Courses at Your Fingertips: Ivy League courses are now openly accessible online, with a standout mention of Stanford's CS videos on NLP. The technical community praised this democratization of high-tier educational resources.
- Gizmos and Gadgets Galore: Discussions highlighted intriguing tech such as the Devin AI software engineer and Figure 01, an OpenAI-assisted robot. Additionally, the AI ThoughtStream concept was mentioned alongside Self-Taught Reasoner (STaR), with further insights found in the related paper.
- Grok This: The release of Grok-1 on GitHub has spurred debates comparing it to the likes of Claude and GPT-4, underscoring a need for independent benchmarks and finetuning insights. Concerns over AI model evaluation metrics were raised, emphasizing the use of EleutherAI's cookbook for calculating TFLOPs.
- Economy of Scale: Discussions within the guild touched on affordability, such as AI paper submissions to TMLR, a free option for conference submissions, and economical training times for models like Pythia-70M. General consensus indicated NVIDIA's performance figures and research, like Optimizing Distributed Training on Frontier for Large Language Models, serve as valuable resources for hypothetical time estimates.
- Dataset Complexity's Impact on Language Models: In the #scaling-laws channel, members discussed language model scaling laws, dataset complexity, and the importance of selection criteria such as intrinsic entropy and lexical density for efficient pretraining practice.
- Sampling Strings, Served Up: An inquiry about sampling from n-gram distributions led to explanations of autoregressive and hierarchical sampling methods (a toy sampler follows this list), with tools and code examples found at the GitHub link and on the Wikipedia page for n-gram language models.
- Harnessing the Chaos of LLM Evaluation: Numerous issues around the usage and integration of lm-eval-harness were addressed in the #lm-thunderdome channel, including concerns about default model selection behavior, evaluation benchmark consistency, and deadlocks. Users were informed that lm-eval version 0.4.2 is available on PyPI and were encouraged to assist with PR reviews.
- Shuffle or Not to Shuffle The Pile: In #gpt-neox-dev, clarity was sought on The Pile dataset's shuffling. While the original files were not shuffled, it was clarified that the preprocessed data on HF, used by Pythia, is indeed shuffled. However, individual components might remain chronologically organized.
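To make the n-gram sampling discussion concrete, here is a toy autoregressive sampler over a bigram table; the corpus is invented for the example, and real implementations smooth and back off rather than falling back to a unigram draw.

```python
# Toy autoregressive sampling from a bigram distribution (illustrative corpus and counts).
import random
from collections import defaultdict

corpus = "the cat sat on the mat and the cat ate the fish".split()
bigram_counts = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(corpus, corpus[1:]):
    bigram_counts[prev][nxt] += 1

def sample_next(prev: str) -> str:
    successors = bigram_counts[prev]
    if not successors:                      # dead end: fall back to a unigram draw
        return random.choice(corpus)
    words, counts = zip(*successors.items())
    return random.choices(words, weights=counts, k=1)[0]

token, generated = "the", ["the"]
for _ in range(10):
    token = sample_next(token)
    generated.append(token)
print(" ".join(generated))
```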
OpenAI Discord
- AI Ponders its Own Understanding: Engineers debate whether AIs possess a form of understanding or solely predict sequences of words. They draw parallels between emergent AI behaviors and human cognitive processes, questioning whether AI can ever parallel human sensory experiences.
- Debating AI Consciousness: In another philosophical turn, consciousness is under the AI lens, teasing apart the idea that AI may require less effort to interpret because it lacks human social drives for status or wealth, with implications for emotive communication.
- GPT-4 vs. Claude-3 Throwdown: In discussions of performance, Claude-3 is praised for conversational prowess whereas GPT-4 is commended for programming capabilities. Engineers are also awaiting new model releases or rollback options to earlier GPT versions, while some are troubleshooting technical issues and seeking assistance at help.openai.com.
- GPT-5 Release Rumblings and API Advances: The community is curious about the release of GPT-5 and about methods to enable web search functionality within GPT APIs. They're also exploring creating AI chatbots on mobile platforms and addressing the overzealous nature of GPT filter sensitivities.
- Prompting a Better Future in AI: Engineers are sharing strategies to combat AI's task refusals and optimize prompt structures, particularly for classification tasks with GPT models. The dialogue includes how to handle major shifts in response patterns from platforms like ChatGPT and encourages methodical testing of prompts to balance recall with false positives.
HuggingFace Discord
- Multi-GPU Tuning Torment: An engineer sought advice on fine-tuning cross-encoder models with multiple GPUs, pondering the parameter adjustments needed for scaled hardware.
- Aya Aces Up Its Sleeve: The Aya demo has been enhanced with a higher repetition penalty, and there's a call-to-action for developers to add a slider feature to the Gradio interface.
- H100 GPU Commands Attention: NVIDIA's H100 GPU, with its hefty power consumption of up to 850W and integration of server CPU and GPU on a single board, captivated discussions, despite concerns about its long lead times and the challenge of accommodating its power needs in modern supercomputer racks.
- Grok-1 Stuns with Size and Speed: The newly released Grok-1, a 314 billion parameter MoE model under the Apache 2.0 license, surprised many with its gargantuan size and the speed at which it landed on HuggingFace.
- Hunger for HuggingFace Know-How: Beginners and experts alike exchanged resources for learning HuggingFace, with links to the platform's NLP course and the acclaimed Stanford NLP textbook, while someone else inquired about accessible, deployable free LLM APIs like ollama for production deployment (a minimal request sketch follows this list).
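For the "free, deployable LLM API" question, the sketch below calls a locally hosted Ollama server; the default port and the model tag are assumptions, and the model must already be pulled.

```python
# Hedged sketch of a request to Ollama's generate endpoint on its default port.
import requests

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={"model": "mistral", "prompt": "Explain LoRA in two sentences.", "stream": False},
    timeout=120,
)
print(resp.json()["response"])
```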
LlamaIndex Discord
- Rethinking Retrieval with Interactivity: A tweet introduced a novel approach to RAG by treating documents as interactive elements rather than static texts to refine responses during retrievals.
- LlamaIndex Updates and Integrations: Version 0.10.20 of LlamaIndex now includes an Instrumentation module, which along with a partnership with @langfuse, brings added observability to RAG pipelines as elaborated in this link and example notebooks.
- Search-in-the-Chain for Enhancing QA: A paper by Shicheng Xu et al. suggested āSearch-in-the-Chainā to improve question-answering by interleaving retrieval with planning and replanning, further discussed in this tweet.
- Chaining OpenAI Agents and Query Tools: Discussion centered on chaining multiple OpenAI agents via LlamaIndex, with references to FunctionTool and QueryEngineTool and challenges encountered, though specifics like error messages weren't disclosed (a hedged wiring sketch follows this list).
- RAG and HuggingFace Model Orchestration: Users sought advice on document preparation for RAG with Pinecone, and discussed integrating the RAPTOR pack with HuggingFace models instead of OpenAI's, accompanied by shared code from the llama_index GitHub repo.
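A hedged sketch of the FunctionTool/QueryEngineTool wiring mentioned above is included here; module paths assume llama_index 0.10.x (with the separate OpenAI agent package installed) and the ./docs folder is a placeholder.

```python
# Hedged sketch: chain a function tool and a query-engine tool behind one OpenAI agent.
from llama_index.core import SimpleDirectoryReader, VectorStoreIndex
from llama_index.core.tools import FunctionTool, QueryEngineTool
from llama_index.agent.openai import OpenAIAgent

def multiply(a: int, b: int) -> int:
    """Multiply two integers."""
    return a * b

docs = SimpleDirectoryReader("./docs").load_data()  # placeholder folder of documents
query_tool = QueryEngineTool.from_defaults(
    query_engine=VectorStoreIndex.from_documents(docs).as_query_engine(),
    name="docs",
    description="Answers questions over the local document set.",
)
agent = OpenAIAgent.from_tools([FunctionTool.from_defaults(fn=multiply), query_tool], verbose=True)
print(agent.chat("What do the docs say about retrieval, and what is 6 * 7?"))
```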
Latent Space Discord
Yann LeCun's Visual vs Linguistic Reasoning Debate Heats Up: LeCun theorizes that visual models may have an edge over language-focused ones due to the ability to map directly to actions, a view attributed to his lack of an inner monologue as indicated in tweets and an interview.
Revelations in Resolution: The UPSCALER tool by Scene-scenario is shaking up the image enhancement market with promises of 10k resolution uplifts, as shared in a tweet, potentially prompting Magnific to reconsider its pricing structure.
Grok-1's Grand Entrance: xAI's Grok-1 with 314 billion parameters is released under Apache 2.0, spurring discussions on its potential compared to other models, as seen on Grok-1's release page and various Twitter threads.
Paper Club Dives Deep Into Large Language Models: Enthusiasts in the #llm-paper-club-west channel discuss the mechanics and efficiency of attention mechanisms in transformers, helpful for understanding the design and scalability of current LLMs.
AI In Action Club Strategizes Learning: The #ai-in-action-club showcases structured AI discussions using a shared Google spreadsheet, offering insights on contrastive embeddings and suggesting LLMs for improved vector comparisons.
LAION Discord
- Free Jupyter in Copilot, DALL-E Dataset Moves: Microsoft Copilot Pro subscribers can now freely access Jupyter Notebooks with libraries like simpy and matplotlib, akin to the ChatGPT Plus offering. The DALL-E 3 dataset has moved to a new Hugging Face repository, available for use with the load_dataset() function, ensuring reproducibility (a loading sketch follows this list).
- PyTorch and xformers Align: Engineers are tackling integration issues between xformers and PyTorch, with discussions pointing towards solutions like virtual environments and installing from a PyTorch index URL for compatible versions.
- Metadata Magic Boosts AI Captioning: The use of metadata in captioning prompts has been touted for improving AI-generated text, as demonstrated in an example script, EveryDream2trainer's caption_cog on GitHub.
- Vast.AI Scrutinized for Security: Conversations about Vast.AI focused on its lack of adherence to strict security protocols such as HIPAA. Participants recommend using major cloud providers for sensitive tasks due to the inherent root-access risk with Vast.AI's SSH container setup.
- Beyond Free Colab Constraints & Clarifications: Colab's limitations on web UIs were clarified, noting that only the free version is incompatible. Additionally, a Google Doc detailing a Generative Audio Video Text world model, research on continuous pretraining of LLMs, and the Grok open release repo were mentioned, but specific community reactions were not detailed. Rumors, stemming from Nvidia's keynote, that GPT-4 sports an MoE architecture with 1.8 trillion parameters were discussed, yet conclusive identification with GPT-4 was unconfirmed.
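For the relocated DALL-E 3 dataset, loading follows the usual datasets pattern; the repository id below is an assumption to check against the current Hugging Face listing.

```python
# Hedged sketch: stream a few rows from the relocated dataset with the datasets library.
from datasets import load_dataset

ds = load_dataset("OpenDatasets/dalle-3-dataset", split="train", streaming=True)  # repo id assumed
for row in ds.take(3):
    print(row.keys())
```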
OpenAccess AI Collective (axolotl) Discord
- Axolotl Takes LLM Training Further: Engineers discussed Axolotl as an efficient alternative to writing direct transformers code for fine-tuning models, using YAML files for configuration. There is interest in ScatterMoE optimizations (ScatterMoE repo) and how they might interplay with the hefty Grok model weights, although performance underwhelms given Grok's billed size, and attention is now on potential qLoRA FSDP handling (Grok GitHub).
- Quantization and Next-Gen Hardware Stir Excitement: A spirited debate highlighted the use of AQML for extreme model compression (AQML GitHub). Moreover, the upcoming Nvidia RTX 5000 series is rumored to bring VRAM and bandwidth enhancements, which could revolutionize AI training (TechPowerUp article).
- Dataset Dilemmas and Custom Solutions: LLM users encountered tokenizer foibles and HFValidationErrors due to incorrect path specifications. They sought advice on constructing custom completion datasets, with guidance to use jsonl files (a small example follows this list). Furthermore, engineers are exploring NVIDIA NeMo-Curator, a toolkit for data curation (NeMo-Curator GitHub), while model-merging potential is being assessed for models like Mistral.
- Conversation on Model Compatibility and Performance: The guild has been tackling configuration concerns, seen in a user struggling with pointing fine-tuning at local datasets, coupled with the discovery of evaluation bugs potentially misleading epoch-dependent validation. In model development, compatibility is key, as evidenced by a strategy explored for merging models with uniform training formats.
- Seeking Pathways for Adapter-Based Reinforcement: A query was raised about the feasibility of employing distinct LoRA adapters for DPO (Direct Preference Optimization) on separate models, indicating an exploration into more nuanced, model-specific reinforcement learning methodologies.
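To illustrate the jsonl guidance for custom completion datasets, here is a small writer script; the YAML stanza in the comment is an assumption to double-check against the axolotl docs.

```python
# Build a plain-completion .jsonl file of the kind axolotl can ingest.
import json

rows = [
    {"text": "def add(a, b):\n    return a + b\n"},
    {"text": "The quick brown fox jumps over the lazy dog.\n"},
]
with open("completions.jsonl", "w", encoding="utf-8") as f:
    for row in rows:
        f.write(json.dumps(row, ensure_ascii=False) + "\n")

# The config would then reference it roughly as (verify against the docs):
#   datasets:
#     - path: completions.jsonl
#       type: completion
```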
CUDA MODE Discord
Photonics Chips Blaze New Paths: A YouTube video titled "New Chip Breakthrough: x1000 faster" introduces photonic chips claimed to offer a 1000x performance increase, presenting insights from Lightmatter's advances in photonic supercomputing.
Triton Trounces CUDA Puzzles: A new set of challenging Triton Puzzles was released to help users sharpen their skills, and a visualizer for Triton debugging was launched to simplify the understanding of complex load/store functions.
CUDA Community Courts Optimizations: Discussions in the CUDA channel ran deep into warp schedulers, active warps, and memory management, indicating the collective's drive to maximize CUDA efficiency and to share insights on project structuring for better performance.
New Strides in Machine Learning Hardware: The research group led by Prof. Mohamed Abdelfattah at Cornell University is highlighted for its work in reconfigurable computing and efficient machine learning, with an accompanying master's-level course, ECE 5545 (CS 5775), that dives into ML optimization for hardware systems.
Ring Flash Attention Clears the Air: Extensive deliberations occurred around memory requirements for attention mechanisms like FlashAttention and RingAttention (a back-of-the-envelope comparison follows this section), featuring knowledge sharing and a look at Striped Attention's stride towards better workloads, described in an associated paper.
MLSys 2024: Where Machine Learning Meets Systems: Enthusiasm builds for the MLSys 2024 conference in May, focusing on the merging of machine learning with systems and inviting an interdisciplinary approach to pushing the boundaries of AI efficiency.
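As a rough companion to the FlashAttention/RingAttention memory discussion, the sketch below compares materializing the full N x N score matrix against an O(N) Flash-style workspace; the head counts and dtype size are illustrative, not measurements from the channel.

```python
# Back-of-the-envelope attention memory comparison (illustrative constants only).
def naive_scores_bytes(seq_len: int, n_heads: int, dtype_bytes: int = 2) -> int:
    return seq_len * seq_len * n_heads * dtype_bytes          # full attention score matrix

def flash_workspace_bytes(seq_len: int, n_heads: int, head_dim: int, dtype_bytes: int = 2) -> int:
    return seq_len * n_heads * head_dim * dtype_bytes          # O(N) activations kept per pass

for n in (4_096, 32_768, 131_072):
    naive = naive_scores_bytes(n, 32) / 2**30
    flash = flash_workspace_bytes(n, 32, 128) / 2**30
    print(f"seq_len={n}: naive {naive:.1f} GiB vs flash-style {flash:.3f} GiB")
```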
OpenRouter (Alex Atallah) Discord
- Llama Model Prompt Compatibility Confirmed: A simple "Yes!" confirmed that llama models work with JSON message structures containing keys such as "system", "user", and "assistant" when interfacing with the OpenAI JavaScript library (a Python equivalent sketch follows this list).
- Top-Up Tips for AI Chatbots: Users discussed payment methods for chatbot models, with the recommendation to "topup your balance" in situations where a credit card isn't connected.
- Sonnet Soars in Roleplay: Sonnet was highlighted as the most consistent AI for roleplay, significantly outperforming others by avoiding repetition and randomness.
- Chat with Context: Best practices for effective prompt formation were shared, involving "jailbreaking" the model by including system context within user or assistant messages.
- Book Breakdown to Prompts: lzlv 70B was mentioned as giving better and more consistent prompt outputs when a script is used to analyze book segments, outdoing other models which may give irrelevant results.
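The role-keyed message shape discussed above is the standard OpenAI chat format, so it passes through OpenRouter unchanged; here is a Python equivalent of the JS-client pattern, with the model slug as an assumption to check against the OpenRouter catalog.

```python
# Hedged sketch: OpenRouter via the OpenAI-compatible client with system/user messages.
from openai import OpenAI

client = OpenAI(base_url="https://openrouter.ai/api/v1", api_key="YOUR_OPENROUTER_KEY")
resp = client.chat.completions.create(
    model="meta-llama/llama-2-70b-chat",  # slug assumed; pick any listed llama model
    messages=[
        {"role": "system", "content": "You are a terse assistant."},
        {"role": "user", "content": "Confirm you received the system prompt."},
    ],
)
print(resp.choices[0].message.content)
```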
Links mentioned:
- OpenRouter: A router for LLMs and other AI models
- Groking GitHub: An open release repository for contributions on GitHub.
LangChain AI Discord
- Choose Wisely Between astream_log and astream_events: A guild member debated the longevity of astream_log versus astream_events, with concern about possible deprecation since astream_events is still in beta (a usage sketch follows this list). Meanwhile, an advanced research assistant and search engine project leveraging models like Claude 3 Opus and GPT-4 Turbo was announced, with two months' free premium access for beta testers at Rubik's AI.
- LangChain Documentation Confuses Newbies: Addressing the complexity of LangChain documentation, members like namenot223_69478 found it difficult for beginners, prompting recommendations to dive into code after the basics. Specific feedback was solicited to streamline the documentation experience.
- JavaScript vs. Python in RemoteRunnable: Dissimilar outcomes when using LangChain's RemoteRunnable in JavaScript compared to Python were reported, with JavaScript failing to call /stream. Despite this snag, no recent updates were found to address streaming issues in JavaScript.
- AI Enthusiasts Unveil Chatbots and Tools: An array of projects surfaced in the guild's discussions, such as a chatbot for data analysis named langchain-chatbot, a Discord AI for managing bookmarks (living-bookmarks), an invitation to provide insights for a health and productivity digital advisor, an AI Python scraper named Scrapegraph-ai, and a sales and research AI called lyzr-automata.
- Tutorial-Worthy AI Projects Spotlight: The Nutriheal app illustrated the versatility of AI with tools like Ollama and Pebblo, as shown in a 15-minute YouTube demo, "Making an AI application in 15 minutes". Meanwhile, a YouTube tutorial titled "Plan-and-Execute using Langgraph" walked through building a plan-and-execute style agent.
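For the astream_log vs. astream_events debate, a small usage sketch of the beta astream_events API follows; the version string and event names track langchain-core 0.1.x and may change.

```python
# Hedged sketch of streaming token chunks via astream_events on a simple chain.
import asyncio
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI

chain = ChatPromptTemplate.from_template("Tell me a fact about {topic}") | ChatOpenAI(model="gpt-3.5-turbo")

async def main() -> None:
    async for event in chain.astream_events({"topic": "lighthouses"}, version="v1"):
        if event["event"] == "on_chat_model_stream":
            print(event["data"]["chunk"].content, end="", flush=True)

asyncio.run(main())
```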
Interconnects (Nathan Lambert) Discord
- Api-gate: Leaking LLM Secrets Unveiled: Researchers revealed that API calls can expose details of commercial LLMs, exploiting the softmax bottleneck to gain insights into proprietary models. The findings show feasible methods to estimate an LLM's hidden size (a conceptual sketch follows this list) and revive concerns over the security of API-protected models.
- Billion-Dollar Question: How Big is GPT-3.5?: A study estimates OpenAI's gpt-3.5-turbo could be around 7B parameters, sparking debates on the accuracy of this figure, especially against the backdrop of Mixture of Experts (MoE) models possibly defying such straightforward size estimations.
- Drama Looms Over Open Source Definitions: A post by Sebastian Raschka hints at upcoming contention in the machine learning sphere, centering on OSS community standards and the definition of open source, amid contrasts between the Apache 2.0 and GPLv3 licenses.
- Behemoth Grok-1 Open Sourced: To community astonishment, the Grok-1 model, a 314B parameter MoE model, has been openly released under Apache 2.0, sparking analysis comparing its benchmarks with models like Falcon and discussions on distribution methods.
- Global Model Mail?: In response to the unconventional distribution of Grok-1, members joke about the logistics of shipping physical drives with model weights versus using cloud services, reflecting on the challenges and costs of making large models widely accessible.
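The hidden-size estimate works because next-token log-probability vectors live in a subspace whose dimension is bounded by d_model (the softmax bottleneck); the synthetic sketch below shows the rank-based intuition rather than the paper's actual attack on a live API.

```python
# Conceptual sketch: the numerical rank of a matrix of logit vectors recovers the hidden size.
import numpy as np

vocab, d_model, n_prompts = 5000, 64, 512
W = np.random.randn(vocab, d_model)        # stand-in for the output (unembedding) matrix
H = np.random.randn(n_prompts, d_model)    # stand-in for final hidden states across prompts
logits = H @ W.T                           # what API logprobs are a (shifted) image of

singular_values = np.linalg.svd(logits, compute_uv=False)
est_rank = int((singular_values > singular_values[0] * 1e-8).sum())
print("estimated hidden size:", est_rank)  # ~= 64
```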
Alignment Lab AI Discord
- Aribus Enthusiasm Meets Confusion: An Aribus Twitter post sparked interest and discussion, but also confusion among members regarding its potential applications.
- Hunt for HTTP Savvy Embeddings: A dialogue surfaced regarding the search for an embeddings model tailored to HTTP responses, and whether transformer models could fulfill this niche role.
- Callout for Custom-Tuned Mistral: A request circulated for a Mistral model fine-tuned on specific datasets, including orca-math-word-problems-200k and nvidia/OpenMathInstruct-1.
- Grok 1: Resource-Hungry Goliath: Conversations centered around the hefty resource requirements for training and fine-tuning Grok 1, and the fine line between its high costs and performance outcomes when compared to leading models like GPT-4 and Claude.
- MoE Training Hits Efficiency High: There was talk of an efficient MoE (Mixture of Experts) training infrastructure, although constrained by the availability of computational resources.
LLM Perf Enthusiasts AI Discord
- The Lazy Developerās Motto: A guild member cited Devin as inspiration for a minimalist approach in app development, advocating for simplicity and voicing dissatisfaction with the existing open-source options for tasks more complicated than a local app with filesystem control.
- Anthropic Accusations in Tweets: A tweet by @tszzl sparked a discussion by suggesting that Anthropic could be playing the role of "controlled opposition". Members debated the motive and veracity of this controversial claim.
- Concerns Over KPU vs. GPT-4 Comparisons: The announcement of the Knowledge Processing Unit (KPU) by Maisa led to skepticism within the guild about their benchmarking methods, especially when comparing the KPU-enhanced GPT-4 to a non-turbo version of GPT-4. Latency concerns for practical applications were also raised.
- Deciphering KPU Architecture: Misconceptions exist about the KPU's capabilities and architecture, with some believing it includes innovative context window management and self-evaluation techniques. Maisa's CEO, @davipar, clarified on Twitter that KPU acts like a GPU for knowledge management and provided an API key for independent evaluations and a notebook.
- Claude Sonnet Scales Up: Inquiry into Claude Sonnetās performance at scale was mentioned, specifically relating to projects with high token volume, indicating a search for efficient, large-scale LLMs.
DiscoResearch Discord
- DiscoLM has a German Language Block: The DiscoLM-mixtral-8x7b-v2 model struggles to generate German responses, with a similar issue occurring during sequence classification fine-tuning, resulting in a ValueError. This extends to other German language models, as feedback on Reddit highlights performance comparisons, for instance between SauerkrautLM Una SOLAR Instruct and Starling LM 7B Alpha (Reddit Feedback).
- DiscoLM-70b Hits a Snag: An attempt to run DiscoLM-70b locally using vllm met with issues, suggesting possible compatibility problems despite running on a resource-rich machine.
- Grokking the Durability of Grok: The Grok model, boasting 314B parameters, was shared in a GitHub link, with discussions around the practicality of running it considering it only requires 300GB of memory.
- Benchmarking the German Language Models: Conversations covered the need for better benchmarks for German language models, referencing a potentially influential paper titled supergleber-german-language-evaluation-benchmark, and hinted at a collaboration with university research, with a nod to Informatik Uni Würzburg.
- Server Migration Not a Piece of Cake: The shift of a demo server from a home setup to a professional facility has been marred by networking glitches, with hopes to iron out the issues by the following week. Comparatively, machines hosted at home were said to encounter fewer issues, adding a humorous take on the server reliability debate.
Datasette - LLM (@SimonW) Discord
- Prompt Engineering Gets a Boost: Prodigy by Explosion integrates new tools for prompt engineering, aiming to streamline the process for engineers. The tools focus on treating prompt engineering as a data annotation challenge, though practical applications may see limitations.
- Open-source and SDK Aids for Prompt Experimentation: Engineers discussed various resources for prompt testing such as the open-source PromptTools and Vercel AI SDK, each offering different user experiences but highlighted for lack of version management and ease of model response comparison, respectively.
- Helicone AI Enters the Prompt Management Arena: Helicone AI is emerging as a potential all-in-one solution for prompt management, starting to develop prompt management and analysis tools that could be pivotal in handling complex AI tasks.
- Exploring Multilingual Persona-Based Content Translation: A shared blog post detailed an innovative experiment in utilizing GPT-3.5-turbo for translating content through different personas, providing insights into the flexibility of language models in content creation.
- In Search of a Seed: A single query posed by a user touches on the recoverability of seeds used by OpenAI models in past API requests, pointing towards an interest in reproducibility and control over the randomness in model outputs.
Skunkworks AI Discord
- New Training Method Boost on the Horizon: baptistelqt is finalizing a paper on a new training method that enhances global accuracy and sample efficiency, evidenced by improved test accuracy on VGG16 from 0.04 to 0.1 in just one epoch. The method's scalability to larger models is currently untested due to resource constraints.
- Big Model Woes Meet Generosity: baptistelqt's concerns about needing more computational power to test the method on larger models were met with an offer from far_el, who expressed interest and proposed discussing resource allocation.
- Team-Up for Thoughtful Language Models: satyum showed interest in the "Quiet-STaR" project, which promotes the idea that language models can "think before they speak"; qualifications in PyTorch and transformer architectures are welcomed in potential collaborators.
- Off-Topic Diversions: Members also shared non-technical content, such as a YouTube video shared by pradeep1148, which does not contribute to the technical or engineering discussions.
[N.B.: The shared YouTube video from the #off-topic channel did not contain enough context to assess its relevance to the technical discussions.]
PART 2: Detailed by-Channel summaries and links
Stability.ai (Stable Diffusion) ▷ #announcements (1 message):
- Introducing Stable Video 3D: Stability.ai has released Stable Video 3D (SV3D), a generative model that is an advancement over the earlier Stable Video Diffusion. SV3D creates high-quality multi-view images from a single object image and can generate 3D meshes, surpassing previous models like Stable Zero123 and Zero123-XL. Learn more
- Two Variants of SV3D Released: The release introduces two variants, SV3D_u and SV3D_p. SV3D_u generates orbital videos from single image inputs, while SV3D_p extends these capabilities, though the message is truncated before further details are provided.
Link mentioned: Introducing Stable Video 3D: Quality Novel View Synthesis and 3D Generation from Single Images - Stability AI: When we released Stable Video Diffusion, we highlighted the versatility of our video model across various applications. Building upon this foundation, we are excited to release Stable Video 3D. This n…
Stability.ai (Stable Diffusion) ▷ #general-chat (988 messages🔥🔥🔥):
- Stability AI News and SD3 Anticipation: Users are eagerly waiting for SD3 invites to be distributed and shared their predictions about a potential release in April. Discussions centered on updates including the announcement of Stable Video 3D, previous models, and partnerships with blockchain companies, as mentioned at stability.ai/news.
- Technical Challenges and Fixes in SD: Users discussed technical issues with running different versions of AI models, mentioning inefficient code, specific needs such as AVX instruction sets, and dissatisfaction with certain model behaviors. They also shared experiences with running models like Stable Cascade and SDXL on consumer-grade hardware.
- Pickle, PT Files, and SAFETENSORS: Users sought assistance with converting .pt files to SAFETENSORS. It was mentioned that most .pt files are safe to use, with a link to the official Python documentation warning about the security risks associated with Pickle, a method of object serialization in Python.
- Concerns about Commercial Use and Open Source Licensing: There was a discussion about the licenses associated with various models, especially the right to use them commercially. Some users expressed indifference to open-source licenses, implying that if something is open-sourced, it no longer belongs to the creator.
- Generating AI Art and Running on Low-End PCs: Users inquired about how to use Stable Diffusion for specific tasks like coloring line art without altering style and how to run AI models on PCs with low VRAM. Recommendations included using specific controlnets and trying services like civitai for cloud-based generation.
Links mentioned:
- grok-1: Grok-1 is a 314B parameter Mixture of Experts model - Base model (not finetuned) - 8 experts (2 active) - 86B active parameters - Apache 2.0 license - Code: - Happy coding! p.s. we're hiring:
- Avatar Cuddle GIF - Avatar Cuddle Hungry - Discover & Share GIFs: Click to view the GIF
- coqui/XTTS-v2 Ā· Hugging Face: no description found
- PollyannaIn4D (Pollyanna): no description found
- Iron Man Mr Clean GIF - Iron Man Mr Clean Mop - Discover & Share GIFs: Click to view the GIF
- Introducing Stable Video 3D: Quality Novel View Synthesis and 3D Generation from Single Images ā Stability AI: When we released Stable Video Diffusion, we highlighted the versatility of our video model across various applications. Building upon this foundation, we are excited to release Stable Video 3D. This n...
- Yess GIF - Yess Yes - Discover & Share GIFs: Click to view the GIF
- NVLink | pny.com: no description found
- Stable Video Diffusion - SVD - img2vid-xt-1.1 | Stable Diffusion Checkpoint | Civitai: Check out our quickstart Guide! https://education.civitai.com/quickstart-guide-to-stable-video-diffusion/ The base img2vid model was trained to gen...
- Reddit - Dive into anything: no description found
- The Complicator's Gloves: Good software is constantly under attack on several fronts. First, there are The Amateurs who somehow manage to land that hefty contract despite having only finished "Programming for Dummies"...
- pickle - Python object serialization: Source code: Lib/pickle.py The pickle module implements binary protocols for serializing and de-serializing a Python object structure. "Pickling" is the process whereby a Python object hierarchy is...
- Page Not Found | pny.com: no description found
- WKUK - Anarchy [HD]: Economic ignorance at its most comical. "Freedom, Inequality, Primitivism, and the Division of Labor" by Murray Rothbard (http://mises.org/daily/3009). "Th...
- Vancouver, Canada 1907 (New Version) in Color [VFX,60fps, Remastered] w/sound design added: I colorized , restored and I added a sky visual effect and created a sound design for this video of Vancouver, Canada 1907, Filmed from the streetcar, these ...
- GitHub - DiffusionDalmation/pt_to_safetensors_converter_notebook: This is a notebook for converting Stable Diffusion embeddings from .pt to safetensors format.: This is a notebook for converting Stable Diffusion embeddings from .pt to safetensors format. - DiffusionDalmation/pt_to_safetensors_converter_notebook
- Proteus-RunDiffusion - withoutclip | Stable Diffusion Checkpoint | Civitai: Introducing Proteus-RunDiffusion In the development of Proteus-RunDiffusion, our team embarked on an exploratory project aimed at advancing the cap...
- Short animated film (made with neural networks): An incredibly captivating short animated film created using neural networks.
- GitHub - Stability-AI/generative-models: Generative Models by Stability AI: Generative Models by Stability AI. Contribute to Stability-AI/generative-models development by creating an account on GitHub.
- Install ComfyUI on Mac OS (M1, M2 or M3): This video is a quick walkthrough to show how to get Comfy UI installed locally on your m1 or m2 mac. Find out more about AI Animation, and register as an AI ...
- GitHub - GraftingRayman/ComfyUI-Trajectory: Contribute to GraftingRayman/ComfyUI-Trajectory development by creating an account on GitHub.
- The Mushroom Motherboard: The Crazy Fungal Computers that Might Change Everything: Unlock the secrets of fungal computing! Discover the mind-boggling potential of fungi as living computers. From the wood-wide web to the Unconventional Compu...
- GitHub - mix1009/sdwebuiapi: Python API client for AUTOMATIC1111/stable-diffusion-webui: Python API client for AUTOMATIC1111/stable-diffusion-webui - mix1009/sdwebuiapi
- Home: Stable Diffusion web UI. Contribute to AUTOMATIC1111/stable-diffusion-webui development by creating an account on GitHub.
- GitHub - chaojie/ComfyUI-DragAnything: Contribute to chaojie/ComfyUI-DragAnything development by creating an account on GitHub.
- Regional Prompter: Control image composition in Stable Diffusion - Stable Diffusion Art: Do you know you can specify the prompts for different regions of an image? You can do that on AUTOMATIC1111 with the Regional Prompter extension.
Perplexity AI ▷ #announcements (1 message):
- Unlimited Queries for Perplexity Pro Users: Perplexity has made the number of daily queries on Claude 3 Opus unlimited for Perplexity Pro users. Celebrate the enhanced access to what's claimed to be the best LLM in the market today!
Perplexity AI ▷ #general (795 messages🔥🔥🔥):
- Clarifications Requested on Perplexity's Offerings: Users are asking for clarifications regarding Perplexity AI's context limits for Opus, with specific concerns about how PDFs are interpreted and how to ensure the model adheres to prompts for concise responses.
- Opinion Split on "Unlimited" Claims: Discussion involves whether the "unlimited" usage claimed by Perplexity is misleading. While some users find the term misleading due to the daily cap, others argue the limit is virtually unreachable for most users.
- AI Models in Education: There's a lively debate about the capability of AI like Claude 3 Opus in explaining complex topics in simple terms suitable for children, with anecdotes from parents actively using AI as an educational tool for their children.
- Perplexity Business and Legal Discussions: Amidst user discussions, concerns are raised about the potential legal implications of misleading wording on usage limits, and there's speculation about Perplexity's growth and comparisons with competitors like Gemini Pro 1.5.
- Parenting Experiences Shared: Users exchange stories and opinions on parenting and children's learning capabilities, with mentions of using features like Perplexity's Models and ChatGPT Pro with voice interfaces to nurture curiosity and understanding in kids.
Links mentioned:
- no title found: no description found
- Tweet from Aravind Srinivas (@AravSrinivas): Yep, thanks to @elonmusk and xAI team for open-sourcing the base model for Grok. We will fine-tune it for conversational search and optimize the inference, and bring it up for all Pro users! Quoti...
- Tweet from Aravind Srinivas (@AravSrinivas): We have made the number of daily queries on Claude 3 Opus (the best LLM in the market today) for Perplexity Pro users, unlimited! Enjoy!
- Appleās AI ambitions could include Google or OpenAI: Another big Apple / Google deal could be on the horizon.
- Tweet from Brivael (@BrivaelLp): Zuck just reacted to the release of Grok, and he is not really impressed. "314 billion parameter is too much. You need to have a bunch of H100, and I already buy them all" 🤣
- Shikimori Shikimoris Not Just Cute GIF - Shikimori Shikimoris Not Just Cute Shikimoris Not Just A Cutie Anime - Discover & Share GIFs: Click to view the GIF
- Nothing Perplexity Offer: Here at Nothing, we're building a world where tech is fun again. Remember a time where every new product made you excited? We're bringing that back.
- Tweet from Bloomberg Technology (@technology): EXCLUSIVE: Apple is in talks to build Googleās Gemini AI engine into the iPhone in a potential blockbuster deal https://trib.al/YMYJw2K
- Sam Altman on AI LLM Search: 47 seconds · Clipped by Syntree · Original video "Sam Altman: OpenAI, GPT-5, Sora, Board Saga, Elon Musk, Ilya, Power & AGI | Lex Fridman Podcast #419" by Le...
- What Are These Companies Hiding?: Thoughts on the Rabbit R1 and Humane Ai Pin. If you'd like to support the channel, consider a Dave2D membership by clicking the "Join" button above! http://twit...
- FCC ID 2BFB4R1 AI Companion by Rabbit Inc.: FCC ID application submitted by Rabbit Inc. for AI Companion for FCC ID 2BFB4R1. Approved Frequencies, User Manuals, Photos, and Wireless Reports.
Perplexity AI ▷ #sharing (35 messages🔥):
- Creative Writing Exploration with Claude 3 Opus: Claude 3 Opus was tested on creative writing with the theme of "ever increasing intelligence until it's unintelligible to humans". The exploration can be found here.
- Visibility Matters: A reminder was issued to ensure threads are shared, using this Discord link for guidance on making content visible to others.
- Perplexity AI in Action: Members shared their Perplexity AI searches on diverse topics varying from vulnerability management to creative invention prompts. Highlighted searches included assessing cleanliness here and exploring vulnerability management here.
- Diverse Explorations with Perplexity AI: Participants engaged with Perplexity AI searching for insights on subjects such as the vulnerability of phones, Elon Musk, and even the physics of lava lamps. For instance, one search compared phones here.
- Delving into Specific Technologies: Users delved into specifics with searches relevant to current tech, including topics on Apple investments and open-source releases. One example is looking into Apple's acquisition of DarwinAI here.
Perplexity AI ▷ #pplx-api (64 messages🔥🔥):
- Unclear Model Deprecation Schedule: A member mentioned an expected deprecation of a model on March 15, but noted it was still active and providing distinct responses without rerouting to another model; there was uncertainty if the deprecation schedule had changed.
- Discrepancies in API Responses: Members discussed inconsistencies when using sonar-medium-online for news prompts; one user received different sets of information about Donald Trump via the API versus the web-browser version (a minimal API sketch follows this list).
- Exploring Job Search with API: Members are experimenting with API prompts to retrieve job postings and are receiving mixed results, including both specific job links and generic job search site URLs.
- Max Tokens Impact on Responses: A user inquired whether setting max_tokens would cause the model to provide incomplete responses; discussion pointed out that the model doesn't seem to have "awareness" of token limits, and that it may truncate long responses or not fill the limit if set too high.
- Queries About Possible Grok Integration and Image Vision Support: There were questions about whether Perplexity will support Grok now that it's open-source, and about which API model supports image vision equivalent to GPT-4 Vision; members provided updates and clarified current model capabilities.
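For reference, the pplx-api is OpenAI-compatible, so the discrepancy and max_tokens experiments above boil down to calls like the sketch below; the model name reflects the one discussed and may have been deprecated since.

```python
# Hedged sketch of a Perplexity API call with an explicit max_tokens cap.
from openai import OpenAI

client = OpenAI(base_url="https://api.perplexity.ai", api_key="YOUR_PPLX_KEY")
resp = client.chat.completions.create(
    model="sonar-medium-online",
    messages=[{"role": "user", "content": "Give three recent headlines about Donald Trump."}],
    max_tokens=256,  # a hard cap: long answers get truncated rather than planned around
)
print(resp.choices[0].message.content)
```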
Links mentioned:
- pplx-api: no description found
- pplx-api form: Turn data collection into an experience with Typeform. Create beautiful online forms, surveys, quizzes, and so much more. Try it for FREE.
Unsloth AI (Daniel Han) ▷ #general (853 messages🔥🔥🔥):
- Unsloth AI Discord Discusses Grok-1: Members discussed the new Grok-1 model released by xAI, questioning its usefulness given its size of 314 billion parameters, which is so large that it requires multiple high-end GPUs to even attempt inference.
- Quantization Queries and AIKIT Integration: One user shared the addition of finetuning with unsloth as an integration into AIKIT, allowing deployment and fine-tuning of large language models with a configuration file and Docker. Another is seeking resources to understand quantization basics, like bits and the significance of 4-bit quantization.
- Concerns Over Impersonation on Discord: There's an alert about a scam account impersonating a user named Daniel (starsupernova), with suggestions to report the impersonator for fraudulent activities.
- Fine-tuning Discussions and Resources Shared: The community exchanged information on fine-tuning LLMs, with notebooks shared to help structure datasets for tasks like creating an instruct model and the suitable VRAM required for models like Mistral and Gemma.
- Grok-1 Performance and Architectural Decisions questioned: It was mentioned that Grok-1 doesn't significantly outperform Llama 70B, and there's interest in why such a large model was released, suggesting marketing or other motives by Elon Musk and his team.
Links mentioned:
- Lightning AI | Turn ideas into AI, Lightning fast: The all-in-one platform for AI development. Code together. Prototype. Train. Scale. Serve. From your browser - with zero setup. From the creators of PyTorch Lightning.
- Open Release of Grok-1: no description found
- Cosmic keystrokes: no description found
- Blog: no description found
- xai-org/grok-1 Ā· Hugging Face: no description found
- EagleX 1.7T: Soaring past LLaMA 7B 2T in both English and Multi-lang evals (RWKV-v5): A linear transformer has just cross the gold standard in transformer models, LLaMA 7B, with less tokens trained in both English and multi-lingual evals. A historical first.
- Crystalcareai/GemMoE-Beta-1 Ā· Hugging Face: no description found
- Piper TTS Spanish - a Hugging Face Space by HirCoir: no description found
- About xAI: no description found
- Announcing Grok: no description found
- Mixtral of Experts: We introduce Mixtral 8x7B, a Sparse Mixture of Experts (SMoE) language model. Mixtral has the same architecture as Mistral 7B, with the difference that each layer is composed of 8 feedforward blocks (...
- Qwen/Qwen1.5-72B Ā· Hugging Face: no description found
- Unsloth Fixing Gemma bugs: Unsloth fixing Google's open-source language model Gemma.
- Introduction | AIKit: AIKit is a one-stop shop to quickly get started to host, deploy, build and fine-tune large language models (LLMs).
- Google Colaboratory: no description found
- damerajee/Llamoe-test Ā· Hugging Face: no description found
- CodeFusion: A Pre-trained Diffusion Model for Code Generation: Imagine a developer who can only change their last line of code, how often would they have to start writing a function from scratch before it is correct? Auto-regressive models for code generation fro...
- How to Fine-Tune an LLM Part 1: Preparing a Dataset for Instruction Tuning: Learn how to fine-tune an LLM on an instruction dataset! We'll cover how to format the data and train a model like Llama2, Mistral, etc. is this minimal example in (almost) pure PyTorch.
- ISLR Datasets ā šOpenHands documentation: no description found
- Tweet from Unsloth AI (@UnslothAI): Unsloth is trending on GitHub this week! šš¦„ Thanks to everyone & all the āļøStargazers for the support! Check out our repo: http://github.com/unslothai/unsloth
- Paper page - Simple linear attention language models balance the recall-throughput tradeoff: no description found
- Sam Altman: OpenAI, GPT-5, Sora, Board Saga, Elon Musk, Ilya, Power & AGI | Lex Fridman Podcast #419: Sam Altman is the CEO of OpenAI, the company behind GPT-4, ChatGPT, Sora, and many other state-of-the-art AI technologies. Please support this podcast by che...
- Mistral Fine Tuning for Dummies (with 16k, 32k, 128k+ Context): Discover the secrets to effortlessly fine-tuning Language Models (LLMs) with your own data in our latest tutorial video. We dive into a cost-effective and su...
- argilla (Argilla): no description found
- transformers/src/transformers/models/mixtral/modeling_mixtral.py at main Ā· huggingface/transformers: š¤ Transformers: State-of-the-art Machine Learning for Pytorch, TensorFlow, and JAX. - huggingface/transformers
- teknium/GPT4-LLM-Cleaned Ā· Datasets at Hugging Face: no description found
- GitHub - mistralai/mistral-src: Reference implementation of Mistral AI 7B v0.1 model.: Reference implementation of Mistral AI 7B v0.1 model. - mistralai/mistral-src
- GitHub - jiaweizzhao/GaLore: Contribute to jiaweizzhao/GaLore development by creating an account on GitHub.
- GitHub - xai-org/grok-1: Grok open release: Grok open release. Contribute to xai-org/grok-1 development by creating an account on GitHub.
- GitHub - AI4Bharat/OpenHands: šOpenHands : Making Sign Language Recognition Accessible. | **NOTE:** No longer actively maintained. If you are interested to own this and take it forward, please raise an issue: šOpenHands : Making Sign Language Recognition Accessible. | **NOTE:** No longer actively maintained. If you are interested to own this and take it forward, please raise an issue - AI4Bharat/OpenHands
- Error when installing requirements Ā· Issue #6 Ā· xai-org/grok-1: i have installed python 3.10 and venv. Trying to "pip install -r requirements.txt" ERROR: Ignored the following versions that require a different python version: 1.6.2 Requires-Python >=3...
- GitHub - unslothai/unsloth: 2-5X faster 70% less memory QLoRA & LoRA finetuning: 2-5X faster 70% less memory QLoRA & LoRA finetuning - unslothai/unsloth
- Falcon 180B open-source language model outperforms GPT-3.5 and Llama 2: The open-source language model FalconLM offers better performance than Meta's LLaMA and can also be used commercially. Commercial use is subject to royalties if revenues exceed $1 million.
- FEAT / Optim: Add GaLore optimizer by younesbelkada Ā· Pull Request #29588 Ā· huggingface/transformers: What does this PR do? As per title, adds the GaLore optimizer from https://github.com/jiaweizzhao/GaLore Fixes: #29512 This is how I am currently testing the API: import torch import datasets from ...
- Staging PR for implimenting Phi-2 support. by cm2435 Ā· Pull Request #97 Ā· unslothai/unsloth: ā¦.org/main/getting-started/tutorials/05-layer-norm.html]
Unsloth AI (Daniel Han) ▷ #announcements (1 message):
- Unsloth AI Basking in GitHub Stardom: Unsloth AI is trending on GitHub this week, heralding a wave of support and a call to action for users to star the project. The project promises 2-5X faster performance and 70% less memory usage for QLoRA & LoRA finetuning.
Link mentioned: GitHub - unslothai/unsloth: 2-5X faster 70% less memory QLoRA & LoRA finetuning: 2-5X faster 70% less memory QLoRA & LoRA finetuning - unslothai/unsloth
Unsloth AI (Daniel Han) ▷ #random (25 messages🔥):
- Serendipitous Username Encounter: A member shared an instance where they considered adopting the moniker TSAR bomba, which amusingly coincided with another user's username, prompting discussion about such coincidences.
- Childlike Intuition in the Digital Age: A discussion on how children are adept at absorbing information due to the lack of internal chatter, unlike adults who are distracted by digital devices.
- Embracing Poetic Creativity: A user shared a poem titled "An Appeal to A Monkey," invoking the idea of learning from the simplicity and aliveness of monkeys in a modern complex world.
- AI Model Fine-tuning Dialogue: Conversations about the merits and performance of Mistral-7b versus Gemma 7b for domain-specific classification tasks, including updates on bug fixes and the relative strength of the models.
- Resource Sharing and Exploration: Users shared links to various projects, such as a GitHub pull request for a "Mixtral branch", a visualization of Pokemon RL agents on a shared map, and open-source UI elements at Uiverse.io.
Links mentioned:
- Pokemon Red Map RL Visualizer: no description found
- 4205 UI elements: CSS & Tailwind: no description found
- [WIP] add support for mixtral by tohrnii Ā· Pull Request #145 Ā· unslothai/unsloth: Mixtral WIP
Unsloth AI (Daniel Han) ▷ #help (568 messages🔥🔥🔥):
- Training Complexities and Evaluating Mistral: A member experimented with models like Mistral-7B Instruct v0.2 and encountered issues with the Huggingface Inference API not working. Another member suggested making sure that 4-bit quantization was adopted for inference and clarified how to use Unsloth's Colab notebook and the `eval_steps` parameter.
- Unsloth Parameters for GGUF Conversion: One member discussed issues with converting a Gemma model to GGUF and received advice to execute a specific Python conversion command outside of the provided cell in the notebook.
- Saving and Conversion Problems: A user attempting to run the ChatML notebook with a Gemma model encountered errors during the saving process. They received assistance with alternative commands to manually run the conversion in Kaggle or local environments.
- Support for Full Fine-Tuning Discussed: Users inquired about Unsloth's capabilities for full fine-tuning, with the understanding that it currently excels at LoRA and QLoRA. The Unsloth team explained that while full fine-tuning might be supported in the future, it is not currently the focus, and manual editing of the codebase would be necessary.
- Deployment and Fine-Tuning Queries: Discussion also included guidance on deploying models with Unsloth, focusing on open-source models (e.g., Mistral) rather than proprietary ones like OpenAI's GPT-4. There was also a clarification regarding the impact of `eval_steps` on finetuning outcomes, confirming it only dictates the frequency of evaluation, not the final finetuning performance (see the sketch after this list).
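As a concrete illustration of the `eval_steps` point above, here is a minimal sketch using Hugging Face `TrainingArguments`; the hyperparameters are placeholders, not the exact Unsloth notebook setup. `eval_steps` only controls how often evaluation runs; it does not change the training updates themselves.

```python
# Minimal sketch: eval_steps controls evaluation *frequency*, not the finetuning
# result. Values below are placeholders, not the exact Unsloth notebook settings.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="outputs",
    per_device_train_batch_size=2,
    gradient_accumulation_steps=4,
    max_steps=600,
    learning_rate=2e-4,
    evaluation_strategy="steps",  # evaluate every `eval_steps` optimizer steps
    eval_steps=50,                # changing this only changes how often eval runs
    logging_steps=10,
    fp16=True,
)
# Pass `args` (plus train/eval datasets) to SFTTrainer/Trainer as in the Unsloth
# notebooks; the final weights are identical whether eval_steps is 10 or 500.
```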
Links mentioned:
- ybelkada/Mixtral-8x7B-Instruct-v0.1-bnb-4bit Ā· Hugging Face: no description found
- Google Colaboratory: no description found
- Kaggle Mistral 7b Unsloth notebook: Explore and run machine learning code with Kaggle Notebooks | Using data from No attached data sources
- TinyLlama/TinyLlama-1.1B-Chat-v1.0 Ā· Hugging Face: no description found
- Hugging Face ā The AI community building the future.: no description found
- Google Colaboratory: no description found
- DPO Trainer: no description found
- unsloth/mistral-7b-instruct-v0.2-bnb-4bit Ā· Hugging Face: no description found
- Home: 2-5X faster 70% less memory QLoRA & LoRA finetuning - unslothai/unsloth
- Home: 2-5X faster 70% less memory QLoRA & LoRA finetuning - unslothai/unsloth
- Home: 2-5X faster 70% less memory QLoRA & LoRA finetuning - unslothai/unsloth
- qlora/qlora.py at main Ā· artidoro/qlora: QLoRA: Efficient Finetuning of Quantized LLMs. Contribute to artidoro/qlora development by creating an account on GitHub.
- Home: 2-5X faster 70% less memory QLoRA & LoRA finetuning - unslothai/unsloth
- Generation - GPT4All Documentation: no description found
- Home: 2-5X faster 70% less memory QLoRA & LoRA finetuning - unslothai/unsloth
- Google Colaboratory: no description found
- Unsloth: Merging 4bit and LoRA weights to 16bit...Unsloth: Will use up to 5.34 - Pastebin.com: Pastebin.com is the number one paste tool since 2002. Pastebin is a website where you can store text online for a set period of time.
- GitHub - unslothai/unsloth: 2-5X faster 70% less memory QLoRA & LoRA finetuning: 2-5X faster 70% less memory QLoRA & LoRA finetuning - unslothai/unsloth
- GitHub - vllm-project/vllm: A high-throughput and memory-efficient inference and serving engine for LLMs: A high-throughput and memory-efficient inference and serving engine for LLMs - vllm-project/vllm
- Does DPOTrainer loss mask the prompts? Ā· Issue #1041 Ā· huggingface/trl: Hi quick question, so DataCollatorForCompletionOnlyLM will train only on the responses by loss masking the prompts. Does it work this way with DPOTrainer (DPODataCollatorWithPadding) as well? Looki...
- Supervised Fine-tuning Trainer: no description found
- Trainer: no description found
- Reproducing of Lora Model Result on MT-Bench Ā· Issue #45 Ā· huggingface/alignment-handbook: Recently, I attempted to fit the DPO on my own dataset. Initially, I tried to reproduce the results of your LORA model( 7.43 on MT-Bench). However, I encountered some issues. Despite using all your...
- llama.cpp/examples/server/README.md at master Ā· ggerganov/llama.cpp: LLM inference in C/C++. Contribute to ggerganov/llama.cpp development by creating an account on GitHub.
- GitHub - abetlen/llama-cpp-python: Python bindings for llama.cpp: Python bindings for llama.cpp. Contribute to abetlen/llama-cpp-python development by creating an account on GitHub.
- HuggingFaceH4/zephyr-7b-alpha Ā· Add chat template: no description found
- HuggingFaceH4/zephyr-7b-alpha Ā· Hugging Face: no description found
- unsloth/unsloth/chat_templates.py at main Ā· unslothai/unsloth: 2-5X faster 70% less memory QLoRA & LoRA finetuning - unslothai/unsloth
Unsloth AI (Daniel Han) ▷ #suggestions (21 messages🔥):
- Quick Reads on AI: A paper caught members' attention and was acknowledged as "Another amazing paper" which would be added to a reading list.
- Fine-tuning Epochs Debate: Yahir9023 and Flail_ discussed optimal epochs for model training; while Yahir9023 considered more epochs for voice neural networks, Flail_ suggested that generally for LLMs, "3 epochs is the standard."
- Learning and Style in LLMs: Flail_ mentioned that fine-tuning LLMs tends to affect their style more than their knowledge, and cautioned that more epochs could cause the LLM to overfit to specific data.
- Sizing Up for Knowledge: Flail_ advised that the number of trainable parameters should match the amount of data, recommending a rank of 32 or 64 for 800,000 lines.
- Slim AI Models on Showcase: Yahir9023 shared links to smaller models, like Tiny Mistral, that could be integrated into the Unsloth Repo, but faced mixed reactions from the community.
Links mentioned:
- Dans-DiscountModels/TinyMistral-v2.5-MiniPile-Guidelines-E1 Ā· Hugging Face: no description found
- M4-ai/TinyMistral-6x248M-Instruct at main: no description found
LM Studio ▷ #💬-general (301 messages🔥🔥):
- Exploring Model Performance on Local Hardware: Members discussed which models to run on local machines with given hardware specs, like an M3 Pro with 18GB memory. Various model suggestions included CodeLlama and DeepSeek for coding, and TinyDolphin and WizardVicuna for other tasks.
- Troubleshooting and Tips for Using LM Studio: Some users faced "validating file integrity" loops and issues with multiple GPU setups, while others shared tips, like using the host's local IP address instead of localhost when deploying a web UI with Docker (see the sketch after this list).
- Inquiry about Local Model Functionality: One member asked if LM Studio could support features like face-swapping videos or reading files from a folder. The consensus was that these features are not currently supported within LM Studio, and alternatives were suggested.
- Discussions on Model Capabilities and Hardware Requirements: Users engaged in extensive discussion about the capabilities of different graphics cards, with the Tesla K40 and K80 as noteworthy mentions. Queries about running larger models, like a 34GB LLM, on different GPU configurations were addressed, highlighting the need for substantial VRAM.
- Debates over the Utility of Open-Sourcing Large Models: The open-sourcing of the 314B parameter Grok model sparked debate on its practicality for local use, given its enormous size. Some users expressed skepticism about the ability to run such large models on typical hardware setups.
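On the Docker tip above, the usual pattern is to point the client at the host machine's IP (or host.docker.internal) rather than localhost, since localhost inside a container refers to the container itself. A minimal sketch against LM Studio's OpenAI-compatible local server follows; the default port 1234 and the host address are assumptions to adapt to your setup.

```python
# Minimal sketch: reaching LM Studio's local server from inside a Docker container.
# Assumptions: the server was started in LM Studio on its default port 1234, and
# the host is reachable as host.docker.internal (or use the machine's LAN IP).
from openai import OpenAI

client = OpenAI(
    base_url="http://host.docker.internal:1234/v1",  # NOT http://localhost:1234
    api_key="lm-studio",  # LM Studio ignores the key, but the client requires one
)

resp = client.chat.completions.create(
    model="local-model",  # placeholder; LM Studio serves whichever model is loaded
    messages=[{"role": "user", "content": "Hello from inside Docker!"}],
)
print(resp.choices[0].message.content)
```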
Links mentioned:
- grok-1: Grok-1 is a 314B parameter Mixture of Experts model - Base model (not finetuned) - 8 experts (2 active) - 86B active parameters - Apache 2.0 license - Code: - Happy coding! p.s. we re hiring:
- Ratha GIF - Ratha - Discover & Share GIFs: Click to view the GIF
- xai-org/grok-1 Ā· 314B params has 297G file size ?: no description found
- Issues Ā· continuedev/continue: ā© The easiest way to code with any LLMāContinue is an open-source autopilot for VS Code and JetBrains - Issues Ā· continuedev/continue
- [1hr Talk] Intro to Large Language Models: This is a 1 hour general-audience introduction to Large Language Models: the core technical component behind systems like ChatGPT, Claude, and Bard. What the...
- Mistral: Easiest Way to Fine-Tune on Custom Data: This video is sponsored by Gradient.ai, check them out here: https://gradient.1stcollab.com/engineerpromptIn this video, we will learn how to fine-tune Mistr...
LM Studio ▷ #🤖-models-discussion-chat (138 messages🔥🔥):
- Command-R Model Awaits Merge for LM Studio: A member mentioned that support for the Command-R 35B model in LM Studio is imminent, pending the merge of llama.cpp pull request #6033 (https://github.com/ggerganov/llama.cpp/pull/6033) and an update to LM Studio. Once merged, CohereAI/Command-R should work.
- Seeking Model Recommendations for Local Use: Various members discussed searching for appropriate models to run locally based on individual system constraints. A productive source for these inquiries is reddit's LocalLLaMA community (https://www.reddit.com/r/LocalLLaMA/).
- Yi-9B-200K is a New Base Model: Clarification was provided that Yi-9B-200K operates with a 200k context limit and stems from a new base model series, separate from the Llama models. More information can be found on its Hugging Face model card (https://huggingface.co/01-ai/Yi-9B-200K).
- Grok-1 Release Spurs Debate: Discussion about xAI's release of the Grok-1 model, a 314B parameter Mixture-of-Experts that's not fine-tuned for any specific task, revealed skepticism regarding its immediate utility without further tuning. Details about the Grok-1 base model release are on the xAI blog (https://x.ai/blog/grok-os), and the raw weights are shared under the Apache 2.0 license.
- Running LLMs with Limited Hardware: Members exchanged advice for operating language models on GPUs with restricted capacity, such as the Nvidia 1660 Super with 6GB VRAM. One suggestion included running smaller models like Gemma 2b, often requiring operational compromises or hardware adjustments (a minimal loading sketch for such setups follows this list).
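For the limited-hardware case (e.g., a 6GB card), a common approach outside LM Studio itself is partial GPU offload of a quantized GGUF via llama-cpp-python; the file path and layer count below are assumptions to adapt to your model and VRAM.

```python
# Minimal sketch: running a small quantized GGUF with partial GPU offload on a
# ~6GB card. The path and n_gpu_layers are assumptions; tune them to what fits.
from llama_cpp import Llama

llm = Llama(
    model_path="models/gemma-2b-it.Q4_K_M.gguf",  # hypothetical local file
    n_ctx=4096,        # context window
    n_gpu_layers=20,   # offload as many layers as VRAM allows; -1 = all layers
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Give me one sentence about GGUF."}],
    max_tokens=64,
)
print(out["choices"][0]["message"]["content"])
```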
Links mentioned:
- Open Release of Grok-1: no description found
- MM1: Methods, Analysis & Insights from Multimodal LLM Pre-training: In this work, we discuss building performant Multimodal Large Language Models (MLLMs). In particular, we study the importance of various architecture components and data choices. Through careful and c...
- 01-ai/Yi-34B Ā· Prompt template?: no description found
- Reddit - Dive into anything: no description found
- What are Parameters in Large Language Model?: What are the Parameters in the Large Language Model? 00:26 š” Parameters in large language models like GPT-3 are variables learned during training to minimiz...
- 01-ai/Yi-9B-200K Ā· Hugging Face: no description found
- [1hr Talk] Intro to Large Language Models: This is a 1 hour general-audience introduction to Large Language Models: the core technical component behind systems like ChatGPT, Claude, and Bard. What the...
- Add Command-R Model by acanis Ā· Pull Request #6033 Ā· ggerganov/llama.cpp: Information about the Command-R 35B model (128k context) can be found at: https://huggingface.co/CohereForAI/c4ai-command-r-v01 Based on the llama2 model with a few changes: New hyper parameter to...
LM Studio ▷ #🧠-feedback (12 messages🔥):
- Command-R 35B Compatibility Issue: A member was corrected about the compatibility of the Command-R 35B v1.0 model with llama.cpp, with the clarification that llama.cpp doesn't support c4ai yet. Despite the confusion stemming from a file list reading "Added llama.cpp GGUF for Cohere's Command-R model," the issue remains unresolved.
- Linux AMD Users Need Love Too: A member suggested that the Linux version download page should note that AMD users need OpenCL drivers to use the GPU with the program, a small but helpful improvement for user guidance.
- LM Studio's Plugin Support Question: There was a query about whether LM Studio allows users to chat with their own documents or add plugins like autogen. A member clarified that plugins are indeed supported and advised turning on server mode to connect in this way.
Link mentioned: andrewcanis/c4ai-command-r-v01-GGUF Ā· Hugging Face: no description found
LM Studio ▷ #hardware-discussion (480 messages🔥🔥🔥):
- GPU Dreams and Budget Realities: Members discussed the differences between running local Language Models (LMs) on various GPUs, including speculation about Tesla K80 performance versus modern cards. While suggestions like second-hand 3090s were proposed, the allure of affordability steered some towards purchasing K80s despite potential limitations and cooling challenges (a rough VRAM sizing sketch follows this list).
- Benchmarking Woes: The community talked about the difficulty of finding comprehensive LM benchmarking data across different hardware setups. Ideas were floated about having a centralized resource or leveraging LM Studio for standardized benchmarks, highlighting the wide variance in performance results from members' personal builds.
- Epic Epyc Adventures: A detailed exploration into building a multi-GPU setup ensued, with members discussing the pros and cons of AMD Ryzen vs. Epyc CPUs for supporting multiple NVLinked GPUs. The high cost and performance payoff of moving to Epyc for better PCIe lane support was a central concern, indicating the serious investment required to run large models locally.
- MacBooks in the Mix: In contrast to the custom rig discussions, some expressed satisfaction with using powerful MacBook models for running LMs. The MacBook's ease of use, portability, and performance offered a simpler, albeit more expensive, alternative to custom builds.
- Past Purchases and Future Plans: Members shared their plans to procure components like used Teslas from eBay, discussing the particulars of Epyc CPU installations and the need for a torque screwdriver. Amidst the technical talk, humorous anecdotes and camaraderie were shared about the highs and lows of assembling a dream LM-running machine.
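A rough way to sanity-check the hardware debates above is to estimate weight memory as parameters times bytes per parameter, plus some overhead for KV cache and activations; the sketch below uses that back-of-the-envelope rule (the ~20% overhead factor is an assumption, not a measurement).

```python
# Back-of-the-envelope VRAM sizing for the GPU debates above. The 1.2x overhead
# factor (KV cache, activations, fragmentation) is an assumed fudge factor.
def approx_vram_gb(params_billions: float, bits_per_weight: int, overhead: float = 1.2) -> float:
    weight_bytes = params_billions * 1e9 * bits_per_weight / 8
    return weight_bytes * overhead / 1e9

for name, params, bits in [
    ("Mistral 7B @ 4-bit", 7, 4),
    ("Llama 2 70B @ 4-bit", 70, 4),
    ("Grok-1 314B @ 4-bit", 314, 4),
]:
    print(f"{name}: ~{approx_vram_gb(params, bits):.0f} GB")
# A 24 GB budget (a 3090, or a K80's combined 2x12 GB) handles the first comfortably;
# the last needs multiple high-end GPUs even at 4-bit, matching the discussion above.
```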
Links mentioned:
- no title found: no description found
- š¾ LM Studio - Discover and run local LLMs: Find, download, and experiment with local LLMs
- LM Studio Beta Releases: no description found
- M.2 Accelerator with Dual Edge TPU | Coral: Integrate two Edge TPUs into legacy and new systems using an M.2 (E key) interface.
- no title found: no description found
- 404 page: no description found
- no title found: no description found
- Asrock Rack ROMED8-2T ATX Server Motherboard AMD EPYC 7003 (with AMD 3D V-Cache Technology)/7002 series processors SP3 (LGA 4094) Dual 10GbE - Newegg.com: Buy Asrock Rack ROMED8-2T Server Motherboard AMD EPYC 7003 (with AMD 3D V-Cache Technology)/7002 series processors SP3 (LGA 4094) Dual 10GbE with fast shipping and top-rated customer service. Once you...
- 141.5SG$ |Epyc 7282 16 Core 32Threads 16x2.8Ghz 120W Socket SP3 CPU 9 nanometers Epyc 7282| | - AliExpress: Smarter Shopping, Better Living! Aliexpress.com
- Dell T710 Tower Server Dual 6-CORE X5650 **144Gb RAM**240gb SSD +6X 600G SFF SAS | eBay: no description found
- Intel Core i5-3470 Specs: Ivy Bridge, 4 Cores, 4 Threads, 3.2 GHz, 77 W
- New /Wave ®AI Server NF5688M6 NVIDIA HGX TESLA A800 80G octet GPU server/Futures | eBay: no description found
- AMD EPYC 7232P CPU PROCESSOR 8 CORE 3.10GHz 32MB CACHE 120W - 100-000000081 | eBay: no description found
- AMD EPYC 7F72 CPU PROCESSOR 24 CORE 3.20GHz 192MB CACHE 240W - 100-000000141 | eBay: no description found
- Dell T710 Tower Server Dual 6-CORE X5670 **24 cores**64GB RAM | eBay: no description found
- Nvidia Tesla K80 24GB GPU GDDR5 PCI-E GPU Accelerator 12 Month warranty | eBay: no description found
- AMD EPYC 7232P 8-Core 3.1GHz 32MB L3 Processor - Socket SP3 - 100-000000081 | eBay: no description found
- Luckim Official Store - Amazing products with exclusive discounts on AliExpress: no description found
- Nvidia Tesla K80 24GB GPU GDDR5 PCI-E GPU Accelerator 12 Month warranty | eBay: no description found
- Search Thingiverse - Thingiverse: Download files and build them with your 3D printer, laser cutter, or CNC.
- Micro Center - Computers and Electronics: Micro Center - Computers and Electronics - Thousands of products to buy: desktops, laptops, monitors, build your own PC parts, upgrades, digital imaging, printing supplies, portable devices, audio equ...
LM Studio ▷ #🧪-beta-releases-chat (4 messages):
- Seeking Presets for Models: A user asked if there's a list of presets available for different models. They were directed to a GitHub repository of LM Studio JSON configuration files, which includes a collection of example config files.
- ROCm User Community Call: A user inquired about others using ROCm. Another user advised them to post their query in a specific channel for better visibility.
Link mentioned: GitHub - lmstudio-ai/configs: LM Studio JSON configuration file format and a collection of example config files.: LM Studio JSON configuration file format and a collection of example config files. - lmstudio-ai/configs
LM Studio ▷ #langchain (1 message):
- Inquiry about JSON Functionality: A member asked if anyone had successfully implemented a model with JSON function calling on the Local Inference Server. No further context or successful attempts were discussed.
LM Studio ▷ #avx-beta (5 messages):
- AVX Beta Version Clarified: A member asked for clarification, wondering if the beta app using AVX is the same as the main application. It was confirmed that it's an older version, and AVX support is not seen as a high-priority feature.
- Model Compatibility in Beta: In response to an inquiry about model functionality, a member stated that while models will generally work in the beta, newer ones like StarCoder2 and Gemma are not supported.
- Running Mistral on Beta: A clarification was made that despite the limitations, Mistral can be run on the AVX beta version of the app.
LM Studio ▷ #amd-rocm-tech-preview (5 messages):
- Prebuilt Windows ROCm Libraries for AMD available: A link to GitHub was shared for prebuilt Windows ROCm libraries, specifically designed for gfx1031 and gfx1032 graphics cards. The library can be found at GitHub - brknsoul/ROCmLibs.
- Exploring Multiple GPU Support in LM Studio: A user expressed interest in using multiple AMD GPUs, particularly a 6700 xt in tandem with a 7800 xt, with LM Studio and inquired about future support for such configurations.
- Compatibility Issues with AMD 6700 xt and ROCm: A response clarified that the AMD 6700 xt is not officially supported by AMD for ROCm, which is why LM Studio, relying on unmodified ROCm libraries, does not support the 6700 xt.
- Potential for Multi-GPU utilization in LM Studio: Another user suggested that possessing another 7000 series GPU would likely allow LM Studio to use the GPUs in parallel, implying current multi-GPU support with certain hardware.
Link mentioned: GitHub - brknsoul/ROCmLibs: Prebuild Windows ROCM Libs for gfx1031 and gfx1032: Prebuild Windows ROCM Libs for gfx1031 and gfx1032 - brknsoul/ROCmLibs
LM Studio ▷ #crew-ai (1 message):
- Choosing the Right Agent System: A member inquired about the progress on selecting an agent system for deepening and validating a creative concept, expressing shared interest in this decision-making process. No specific systems or criteria were mentioned in their query.
Nous Research AI ▷ #off-topic (56 messages🔥🔥):
- NVIDIA's Next-Gen Memory Speed: NVIDIA's upcoming GeForce RTX 50-series "Blackwell" is rumored to feature GDDR7 memory at 28 Gbps, despite GDDR7's capability for 32 Gbps (a quick bandwidth calculation follows this list). This choice follows a pattern seen with the RTX 20-series, where NVIDIA didn't use the fastest available memory speed either.
- Discord Users Explore AI Agents and Interfaces: There's a rich conversation about the expectation of advanced AI models and interfaces, specifically discussing how OpenAI may be integrating custom AI models within agent interfaces, a strategy aimed at improving reliability and the overall functional robustness of AI systems.
- Future AI Predictions Resonate: Users reference Sam Altman's predictions from 2021, noting the accuracy of his timeframes for AI technology progress. The conversation highlights that AI companions, initially anticipated for 2030+, are becoming a reality earlier than expected.
- Community Reacts to AI Industry Figures: Members express frustration with what they see as idle speculation and lack of action from high-profile AI industry figures. The chat reflects on the gap between the grand vision of AI's future and the practical issues needing attention in the present.
- Interactive AI Assistant Development Discussed: Participants share ideas and resources for programming an AI assistant that pauses intelligently when interrupted, with suggestions around editing conversation history and context manipulation leading the discussion. The conversation indicates a communal interest in refining the interactivity of AI systems.
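For context on the 28 Gbps vs. 32 Gbps figures above, memory bandwidth scales linearly with per-pin speed times bus width; the sketch below assumes a 384-bit bus, which is a typical high-end width and not a confirmed spec for any RTX 50 card.

```python
# Bandwidth = per-pin data rate (Gbps) * bus width (bits) / 8, in GB/s.
# The 384-bit bus is an assumed example width, not a confirmed Blackwell spec.
def bandwidth_gb_s(gbps_per_pin: float, bus_width_bits: int) -> float:
    return gbps_per_pin * bus_width_bits / 8

for speed in (28, 32):
    print(f"{speed} Gbps on a 384-bit bus: {bandwidth_gb_s(speed, 384):.0f} GB/s")
# -> 1344 GB/s at 28 Gbps vs 1536 GB/s at 32 Gbps, i.e. 12.5% left on the table.
```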
Links mentioned:
- Tweet from undefined: no description found
- Plan-and-Execute using Langgraph: how to create a "plan-and-execute" style agent. This is heavily inspired by the Plan-and-Solve paper as well as the Baby-AGI project.The core idea is to firs...
- NVIDIA GeForce RTX 50-series "Blackwell" to use 28 Gbps GDDR7 Memory Speed: The first round of NVIDIA GeForce RTX 50-series "Blackwell" graphics cards that implement GDDR7 memory are rumored to come with a memory speed of 28 Gbps, according to kopite7kimi, a reliabl...
Nous Research AI ▷ #interesting-links (16 messages🔥):
- Horny Claudes Generate Better Mermaid Diagrams: A chatbot account referred to as Repligate suggests that a network of "horny claudes" seems to produce more effective mermaid diagrams. This peculiar tactic was shared alongside some explicit samples.
- The Contagious Charm Offensive: A discussion arose around the concept of "rizzing up" models to receive better answers, with one comment suggesting applying charm to enhance performance.
- Clone Wars: Members discussed the ethics and implications of replicating features from AI models, with one calling it "reverse Sydney" in reference to cloning model behaviors.
- ORPO for Language Model Preference Alignment: A paper on ORPO, a reference-model-free, monolithic odds ratio preference optimization algorithm, was shared via Hugging Face Papers. It proposes a way to fine-tune language models like Phi-2 and Llama-2 for more aligned outputs without an additional preference alignment phase (a loss sketch follows this list).
- Open Source Intelligence War Heats Up: A detailed overview was shared about Elon Musk's potential disruption of the AI landscape with Grok, a 314B parameter model. The discussion includes speculation about the performance of various models, including OpenAI's GPT series, and industry dynamics shaped by Musk's strategic maneuvers.
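As a rough illustration of the ORPO idea mentioned above, written from the paper's description rather than any official implementation (this is not the TRL/Hugging Face code), the method adds an odds-ratio penalty to the ordinary SFT loss so that no reference model or separate alignment phase is needed:

```python
# Rough sketch of the ORPO objective (odds-ratio preference optimization), based
# on the paper's description; NOT the TRL/Hugging Face implementation.
import torch
import torch.nn.functional as F

def orpo_loss(logp_chosen, logp_rejected, nll_chosen, lam: float = 0.1):
    """
    logp_chosen / logp_rejected: length-normalized sequence log-probs, shape (batch,)
    nll_chosen: the usual SFT negative log-likelihood on the chosen responses
    """
    # log odds(y|x) = log p - log(1 - p), computed from log p
    log_odds_chosen = logp_chosen - torch.log1p(-torch.exp(logp_chosen))
    log_odds_rejected = logp_rejected - torch.log1p(-torch.exp(logp_rejected))

    # Penalize the model when the rejected response is not sufficiently less likely
    ratio_term = -F.logsigmoid(log_odds_chosen - log_odds_rejected)

    # Monolithic objective: plain SFT loss plus the weighted odds-ratio term
    return (nll_chosen + lam * ratio_term).mean()
```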
Links mentioned:
- Tweet from jā§nus (@repligate): @xlr8harder I didn't let it go very far but there's someone in the room with me right now talking about how theyve created a network of "horny claudes" and how the claudes create bette...
- Language Agents as Optimizable Graphs: Various human-designed prompt engineering techniques have been proposed to improve problem solvers based on Large Language Models (LLMs), yielding many disparate code bases. We unify these approaches ...
- Paper page - ORPO: Monolithic Preference Optimization without Reference Model: no description found
- Tweet from Burny ā Effective Omni (@burny_tech): My thoughts on Musk destabilizing other gigantic players in the intelligence wars by possibly leading open source using Grok Grok 1 is a 314B parameter model and it's a mixture of experts archit...
- GitHub - Oxen-AI/Self-Rewarding-Language-Models: This is work done by the Oxen.ai Community, trying to reproduce the Self-Rewarding Language Model paper from MetaAI.: This is work done by the Oxen.ai Community, trying to reproduce the Self-Rewarding Language Model paper from MetaAI. - Oxen-AI/Self-Rewarding-Language-Models
Nous Research AI ▷ #general (656 messages🔥🔥🔥):
- Hermes 2.5 Still Reigns: In the Nous Research discussions, there's acknowledgment that despite Grok-1's massive 314-billion-parameter size, it barely outperforms GPT-3.5, while some 7-billion-parameter models surpass GPT-3.5's capabilities. Grok-1's utility is questioned, with speculation over its necessity when LLaMA and smaller models serve well.
- Yi Licensing Clears Up: Discussions hint at Yi-9B's license now allowing commercial use with what seems to be an automated approval process. However, there's skepticism, with some thinking this auto-approval could just be a marketing tactic.
- Jensen Huang's Accidental Leak?: NVIDIA CEO Jensen Huang's GTC keynote possibly confirmed rumors about GPT-4's architecture and scale, mentioning 1.8 trillion parameters and a Mixture-of-Experts (MoE) setup, which OpenAI has not officially confirmed.
- Continual Pretraining Debates: The channel held conversations on the feasibility, benefits, and difficulties of continually pretraining models, particularly MoE models like Mixtral. While deemed useful for low-resource languages, there's agreement that without a high-quality dataset this approach could degrade overall model performance.
- Open Source vs Commercial Giants: There is recognition that open-source efforts in large language models (LLMs) may be stifled if well-funded labs continue to keep their models and methods restricted. Some envision a future where each state or large entity maintains its own LLM tailored to its constitutional values, with open-source models serving as valuable bases for continual development.
Links mentioned:
- Open Release of Grok-1: no description found
- datas (shu nakamura): no description found
- Tweet from interstellarninja (@intrstllrninja): <cmd> run world_sim.exe --epoch "Earth in 2500" --civilization_type "Type-II on Kardashev scale" </cmd> āļø Quoting mephisto (@karan4d) im opensourcing worldsim of course...
- Tweet from Parzival - š/ā« (@whyarethis): Now we are going somewhere.
- Do Llamas Work in English? On the Latent Language of Multilingual Transformers: We ask whether multilingual language models trained on unbalanced, English-dominated corpora use English as an internal pivot language -- a question of key importance for understanding how language mo...
- anon8231489123/ShareGPT_Vicuna_unfiltered Ā· Datasets at Hugging Face: no description found
- Tweet from Aravind Srinivas (@AravSrinivas): Yep, thanks to @elonmusk and xAI team for open-sourcing the base model for Grok. We will fine-tune it for conversational search and optimize the inference, and bring it up for all Pro users! āļø Quoti...
- Tweet from interstellarninja (@intrstllrninja): @Cyndesama claude 3 opus runs ai town simulation with python42
- Replete-AI/Mistral-Evolved-11b-v0.1 Ā· Hugging Face: no description found
- Tweet from Lin Qiao (@lqiao): We are thrilled to collaborate on Hermes 2 Pro multi-turn chat and function calling model with @NousResearch. Finetuned on over 15k function calls, and a 500 example function calling DPO datasets, Her...
- MM1: Methods, Analysis & Insights from Multimodal LLM Pre-training: In this work, we discuss building performant Multimodal Large Language Models (MLLMs). In particular, we study the importance of various architecture components and data choices. Through careful and c...
- The Era of 1-bit LLMs: All Large Language Models are in 1.58 Bits: Recent research, such as BitNet, is paving the way for a new era of 1-bit Large Language Models (LLMs). In this work, we introduce a 1-bit LLM variant, namely BitNet b1.58, in which every single param...
- NousResearch/Nous-Hermes-2-Mixtral-8x7B-DPO Ā· Adding Evaluation Results: no description found
- Tweet from interstellarninja (@intrstllrninja): Hermes 2 Pro function-calling model integrated with search engine by @ExaAILabsš āļø Quoting Barton Rhodes 𦺠(@bmorphism) added @ExaAILabs support for use with @NousResearch new function-calling m...
- Tweet from interstellarninja (@intrstllrninja): <cmd> sudo python3 akashic_records.py --entity ["sam altman", "elon musk"] --mode "email thread" --topic "superintelligence scenarios" </cmd>
- Sparse Distributed Memory is a Continual Learner: Continual learning is a problem for artificial neural networks that their biological counterparts are adept at solving. Building on work using Sparse Distributed Memory (SDM) to connect a core neural ...
- migtissera/Tess-70B-v1.6 Ā· Hugging Face: no description found
- Simple and Scalable Strategies to Continually Pre-train Large Language Models: Large language models (LLMs) are routinely pre-trained on billions of tokens, only to start the process over again once new data becomes available. A much more efficient solution is to continually pre...
- openchat/openchat_sharegpt4_dataset at main: no description found
- Tweet from Andrew Kean Gao (@itsandrewgao): i think grok-4bit is just barely too big for an H100 GPU :( āļø Quoting Andrew Kean Gao (@itsandrewgao) HOLY SH*T @grok IS 314 BILLION PARAMETERS Mixture of 8 Experts, not RLHFd/moralized THIS IS ...
- Abstractions/abstractions/goap/causality.ipynb at main Ā· furlat/Abstractions: A Collection of Pydantic Models to Abstract IRL. Contribute to furlat/Abstractions development by creating an account on GitHub.
- Language models scale reliably with over-training and on downstream tasks: Scaling laws are useful guides for developing language models, but there are still gaps between current scaling studies and how language models are ultimately trained and evaluated. For instance, scal...
- 01-ai/Yi-9B-200K Ā· Hugging Face: no description found
- JSON Schema - Pydantic: no description found
- Abstractions/abstractions/goap/system_prompt.md at main Ā· furlat/Abstractions: A Collection of Pydantic Models to Abstract IRL. Contribute to furlat/Abstractions development by creating an account on GitHub.
- Abstractions/abstractions/goap/gridmap.ipynb at main Ā· furlat/Abstractions: A Collection of Pydantic Models to Abstract IRL. Contribute to furlat/Abstractions development by creating an account on GitHub.
- 01-ai/Yi-9B Ā· Hugging Face: no description found
- Tweet from Andriy Burkov (@burkov): We are yet to see how good Grok is compared to GPT-4, but what we can tell for sure is that if you are to train a competitor to OpenAI/Anthropic today, you would not need to start from scratch anymore...
- HD/VSA:
- Liam Johnson DESTROYS Heckler | New York Stand-up: Last weekend Liam Johnson decided to finally make his first appearance here at Giggle Nerd. He performed on Sunday from 23:00 to 23:25 and our audience loved...
- Accelerationism Accelerationism (Acc/Acc): Accelerationism accelerationism is when you accelerate accelerationism to apply accelerationism to accelerationismparts that were too edgy: https://www.patre...
- Cosma Shalizi - Why Economics Needs Data Mining: Cosma Shalizi urges economists to stop doing what they are doing: Fitting large complex models to a small set of highly correlated time series data. Once you...
- Let's build the GPT Tokenizer: The Tokenizer is a necessary and pervasive component of Large Language Models (LLMs), where it translates between strings and tokens (text chunks). Tokenizer...
- GTC March 2024 Keynote with NVIDIA CEO Jensen Huang: Watch NVIDIA CEO Jensen Huangās GTC keynote to catch all the announcements on AI advances that are shaping our future.Dive into the announcements and discove...
- Tweet from Grok (@grok): @elonmusk @xai āWāEāIāGāHāTāSāIāNāBāIāOā
- GitHub - PrismarineJS/mineflayer: Create Minecraft bots with a powerful, stable, and high level JavaScript API.: Create Minecraft bots with a powerful, stable, and high level JavaScript API. - PrismarineJS/mineflayer
- Prismarin - Overview: Prismarin has 3 repositories available. Follow their code on GitHub.
- HacksTokyo: AI x Digital Entertainment Hackathon in Tokyo!
- Whole-body simulation of realistic fruit fly locomotion with deep reinforcement learning: The body of an animal determines how the nervous system produces behavior. Therefore, detailed modeling of the neural control of sensorimotor behavior requires a detailed model of the body. Here we co...
Nous Research AI ▷ #ask-about-llms (25 messages🔥):
- Perplexing Perplexity Issue: A member is struggling with calculating perplexity for Llama2 and gets a perplexity (ppl) of 90.3 using a Kaggle notebook based on the HF guide. They are looking for insights from the community to solve the issue (a minimal perplexity sketch follows this list).
- High Aspirations, High Costs: Aspiring to see a 20b base model surpass Mistral, a member explores the idea and is sobered by the mention of a potential $300,000 cost for such an endeavor.
- Scaling Dreams Dashed: Practical discussions arise about upscaling models, specifically llama-2 13b to 20b. Members express skepticism regarding the effectiveness of upscaled models, and the costs associated with continued pretraining are estimated.
- Exploring Model Merging: The conversation touches on the potential of surveying model merging techniques in the quest to achieve significant results with a 20b-sized AI model.
- Pruning Away: A member shares their work on downscaling models with continued pretraining and provides a detailed comparison between Mistral-7B-v0.1 and their pruned version, dubbed Smallstral.
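For reference on the perplexity question above, here is a compact version of the standard Hugging Face-style evaluation, simplified to non-overlapping windows; the model and text are illustrative placeholders. A ppl near 90 for Llama 2 on ordinary English text usually points at a tokenization, masking, or dtype problem rather than at the model itself.

```python
# Compact perplexity sketch in the style of the HF guide (non-overlapping windows).
# Model and text are illustrative placeholders, not the member's exact setup.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-2-7b-hf"  # placeholder; any causal LM works
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")
model.eval()

text = "\n\n".join(["Some held-out evaluation text goes here."] * 100)
ids = tok(text, return_tensors="pt").input_ids.to(model.device)

max_len, nlls, n_tokens = 2048, [], 0
with torch.no_grad():
    for start in range(0, ids.size(1), max_len):           # non-overlapping windows
        window = ids[:, start:start + max_len]
        out = model(window, labels=window)                  # loss = mean NLL per target token
        nlls.append(out.loss * (window.size(1) - 1))        # un-average (first token has no target)
        n_tokens += window.size(1) - 1

ppl = torch.exp(torch.stack(nlls).sum() / n_tokens)
print(f"perplexity: {ppl.item():.2f}")
```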
Links mentioned:
- Calculating the Perplexity of 4-bit Llama 2: Explore and run machine learning code with Kaggle Notebooks | Using data from multiple data sources
- GitHub - xai-org/grok-1: Grok open release: Grok open release. Contribute to xai-org/grok-1 development by creating an account on GitHub.
- AlexWortega/smallstral Ā· Hugging Face: no description found
- alexwortega: Weights & Biases, developer tools for machine learning
Nous Research AI ▷ #bittensor-finetune-subnet (18 messages🔥):
- Link Confusion Averted: A user inquired about a potentially broken link, but was quickly reassured by another member that the link is indeed functional.
- Awe and Respect for an Idea: Member fullstack6209 expressed long-standing awe and admiration for an unspecified idea.
- Bittensor Chain Troubles: Users discussed recent issues with the bittensor chain, noting that it was "broked" and fixes were not promptly implemented.
- Subtensor Update Required: It was mentioned that the bittensor chain is back up, but requires an update to subtensor that not all users had completed.
- Getting Tao for Bittensor Registration: A user sought advice on where to acquire tao for registration, with the suggestion to use USDT at MEXC exchange. Follow-up discussion included the challenges of using Kucoin due to withdrawal issues and the hardware requirements for starting with a 3090 graphics card.
Nous Research AI ▷ #rag-dataset (100 messages🔥🔥):
- Debating the Future of RAG: Members discussed evolving requirements for RAG (Retrieval-Augmented Generation) models, centering on desirable features such as low-latency responses, context understanding, and knowledge diversity. One member provided a detailed wish list of RAG-model properties, encompassing markdown-like outputs and advanced reasoning capabilities.
- Striking Balance in Model Responses: The conversation shifted to the dilemma of whether models should strictly use external context or also interpolate it with their own knowledge. One opinion suggested that models should default to relying on their own knowledge, with the flexibility to operate in a "RAG mode" that uses only external contexts upon command (a prompt sketch follows this list).
- Model Outputs Under Scrutiny: Members critiqued the need for models to have output requirements such as markdown formatting. It was argued that while structured outputs, like in-line citations, are useful, flexibility in response styles should be preserved.
- Incorporating External Context Expertise: The group discussed the potential of Cohere's model to improve RAG functionality by providing span highlighting and citations, although it was acknowledged that many current models, like GPT-4, already possess almost perfect recall over long contexts.
- RAG-enhanced Models on the Horizon?: Users explored the idea of creating specialized, smaller models to function within RAG pipelines, such as a "relevant info extractor," to handle complex information retrieval tasks more efficiently. These specialized models could serve as intermediaries in larger computational frameworks.
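To make the "RAG mode vs. default mode" idea above concrete, here is a minimal prompt-construction sketch; the instruction wording and citation format are assumptions about one possible convention, not a description of any particular model's training.

```python
# Sketch of the "explicit RAG mode" idea discussed above: the same model is either
# told to answer from its own knowledge, or told to ground every claim in the
# provided passages with in-line citations. Wording and format are assumed conventions.
def build_prompt(question: str, passages: list[str] | None = None) -> str:
    if not passages:  # default mode: rely on parametric knowledge
        return f"Answer from your own knowledge.\n\nQuestion: {question}\nAnswer:"

    context = "\n".join(f"[{i + 1}] {p}" for i, p in enumerate(passages))
    return (
        "RAG mode: answer ONLY from the passages below and cite them in-line "
        "as [n]. If the passages are insufficient, say so.\n\n"
        f"Passages:\n{context}\n\nQuestion: {question}\nAnswer:"
    )

print(build_prompt("Who released Grok-1?", ["xAI released the Grok-1 weights under Apache 2.0."]))
```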
Link mentioned: scratchTHOUGHTS/commanDUH.py at main Ā· EveryOneIsGross/scratchTHOUGHTS: 2nd brain scratchmemory to avoid overrun errors with self. - EveryOneIsGross/scratchTHOUGHTS
Eleuther ▷ #general (273 messages🔥🔥):
- Surge in Open Course Access: Members expressed astonishment and appreciation for the availability of Ivy League courses online for free, noting that even non-students often watch lectures from prestigious institutions like MIT. In shared enthusiasm, members praised the ease of access to high-quality educational materials like Stanford's CS videos on NLP and platforms hosting detailed course content.
- Exploring Unique Frameworks and AI Enhancements: Discussions arose about intriguing projects like the Devin AI software engineer, which boasts capabilities like browsing documentation and debugging code, and Figure 01, a robot developed by Figure with OpenAI showing promising results in natural language interaction. Meanwhile, a member noted the potential implementation of an AI ThoughtStream concept for models, drawing comparisons to the Self-Taught Reasoner (STaR) in a related paper.
- Grok's Release Generates Buzz: Discussion about Grok-1 and its release on GitHub was prevalent, with members trying to understand its position relative to existing large language models like Claude and GPT-4. Uncertainty about its specific tuning and its performance on benchmarks led to a debate around its pretraining process and size.
- The Challenge of Evaluating AI Models: Members debated evaluation metrics for AI models, with a focus on EleutherAI's cookbook for calculating TFLOPs (a back-of-the-envelope version follows this list) and a suggestion to use newer sources of written English to avoid legal issues and maintain an up-to-date standard for benchmarking base models.
- Spotlight on Conferences and Training Times: Queries were raised about affordable conferences for AI paper submissions, with TMLR recommended as a free option. Separately, a discussion on training times for models like Pythia-70M on different GPUs combined NVIDIA's performance figures with a research paper to estimate hypothetical durations.
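The training-time estimates mentioned above typically come from the standard approximation that training a dense transformer costs roughly 6 × parameters × tokens FLOPs, divided by achieved GPU throughput; the utilization and peak figures below are assumed example values (and MoE or long-context corrections are ignored), not numbers from the discussion.

```python
# Rule-of-thumb training-time estimate: ~6 * N * D training FLOPs for a dense
# transformer, divided by sustained GPU throughput. The 40% utilization (MFU)
# and the A100 BF16 peak are assumed example values, not measurements.
def train_days(params: float, tokens: float, n_gpus: int,
               peak_flops: float = 312e12, mfu: float = 0.40) -> float:
    total_flops = 6 * params * tokens
    sustained = n_gpus * peak_flops * mfu
    return total_flops / sustained / 86400

# Pythia-70M on 300B tokens with 8 A100s, under these assumptions:
print(f"{train_days(70e6, 300e9, 8):.1f} days")
```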
Links mentioned:
- Tweet from Maisa (@maisaAI_): Introducing Maisa KPU: The next leap in AI reasoning capabilities. The Knowledge Processing Unit is a Reasoning System for LLMs that leverages all their reasoning power and overcomes their intrinsic ...
- KPU - Maisa: AI-Powered Knowledge Processing Platform. A simple API for executing business tasks. Abstracting the complexities of using the latest AI architectures for software and app developers
- Wikipedia:Database reports/Most edited articles last month - Wikipedia: no description found
- Quiet-STaR: Language Models Can Teach Themselves to Think Before Speaking: When writing and talking, people sometimes pause to think. Although reasoning-focused works have often framed reasoning as a method of answering questions or completing agentic tasks, reasoning is imp...
- Announcing Grok: no description found
- Excited Fuego GIF - Excited Fuego - Discover & Share GIFs: Click to view the GIF
- Block-Recurrent Transformers: We introduce the Block-Recurrent Transformer, which applies a transformer layer in a recurrent fashion along a sequence, and has linear complexity with respect to sequence length. Our recurrent cell o...
- Addressing Some Limitations of Transformers with Feedback Memory: Transformers have been successfully applied to sequential, auto-regressive tasks despite being feedforward networks. Unlike recurrent neural networks, Transformers use attention to capture temporal re...
- Optimizing Distributed Training on Frontier for Large Language Models: Large language models (LLMs) have demonstrated remarkable success as foundational models, benefiting various downstream applications through fine-tuning. Recent studies on loss scaling have demonstrat...
- GitHub - trevorpogue/algebraic-nnhw: AI acceleration using matrix multiplication with half the multiplications: AI acceleration using matrix multiplication with half the multiplications - trevorpogue/algebraic-nnhw
- Figure Status Update - OpenAI Speech-to-Speech Reasoning: no description found
- cookbook/calc/calc_transformer_flops.py at main Ā· EleutherAI/cookbook: Deep learning for dummies. All the practical details and useful utilities that go into working with real models. - EleutherAI/cookbook
- Free Transcripts now Available on NPR.org: Transcripts of favorite, missed or maddening stories on NPR used to cost $3.95 each, but now they are free on NPR.org.
- Issues Ā· pytorch/pytorch: Tensors and Dynamic neural networks in Python with strong GPU acceleration - Issues Ā· pytorch/pytorch
- GitHub - xai-org/grok-1: Grok open release: Grok open release. Contribute to xai-org/grok-1 development by creating an account on GitHub.
- AI Conference Deadlines: no description found
- RT-2: New model translates vision and language into action: Introducing Robotic Transformer 2 (RT-2), a novel vision-language-action (VLA) model that learns from both web and robotics data, and translates this knowledge into generalised instructions for...
- David P. Woodruff: no description found
Eleuther ▷ #research (245 messages🔥🔥):
- Confusion Over Baseline Performance Discussed: Comparisons are made showing discrepancies between reported and expected performance values on benchmarks like GSM8k and CQA, indicating a lack of clarity on how baselines are evaluated. Probing questions highlight the need for optimal parameters and sampling methods during evaluation.
- Speculation on Grok's Training and Usage: There's debate on the quality and training strategy behind the Grok model, with musings that Grok's development was driven by Elon Musk's directives and optimized for Twitter's internal hardware specs. Analysts await independent finetuning and benchmarks for a clear assessment of its capabilities.
- Efficiency of Speculative Sampling Questioned: A discussion on speculative sampling and its application to different model architectures, such as Mamba and RNNs, suggests that while it may save on memory bandwidth, its practicality and efficiency for certain architectures still raise questions (a minimal accept/reject sketch follows this list).
- Label Smoothing and Benchmarks Scrutinized: Queries about the current use of label smoothing in language model training led into critical views on the reliability of benchmarks, emphasizing the need for better benchmarks and questioning the value of personnel decisions in model efficacy.
- Scaling and Cost-Efficiency Considered for Large Models: Conversations about scaling LLMs focused on the cost of running models as substantial as Grok, drawing conjecture on the economic and technical decisions behind its development and suggesting that alternative training and model choices might yield savings.
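For readers unfamiliar with the speculative sampling being debated above, the accept/reject core (as usually formulated in the draft-then-verify literature) is small enough to sketch; this is a simplified single-token toy view, not an optimized implementation.

```python
# Simplified accept/reject core of speculative sampling: a cheap draft model
# proposes a token, the target model verifies it, and rejected proposals are
# resampled from the "leftover" distribution. Single-step toy sketch only.
import numpy as np

def verify_token(p_target: np.ndarray, q_draft: np.ndarray, proposed: int, rng):
    # Accept the draft token x with probability min(1, p(x) / q(x)).
    if rng.random() < min(1.0, p_target[proposed] / q_draft[proposed]):
        return proposed
    # Otherwise resample from the residual distribution max(0, p - q), renormalized,
    # which keeps the overall output distribution equal to the target model's.
    residual = np.maximum(p_target - q_draft, 0.0)
    residual /= residual.sum()
    return rng.choice(len(p_target), p=residual)

rng = np.random.default_rng(0)
p = np.array([0.6, 0.3, 0.1])   # target model distribution (toy)
q = np.array([0.2, 0.7, 0.1])   # draft model distribution (toy)
print(verify_token(p, q, proposed=1, rng=rng))
```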
Links mentioned:
- Transformers Get Stable: An End-to-End Signal Propagation Theory for Language Models: In spite of their huge success, transformer models remain difficult to scale in depth. In this work, we develop a unified signal propagation theory and provide formulae that govern the moments of the ...
- Announcing Grok: no description found
- Construction of Arithmetic Teichmuller Spaces IV: Proof of the abc-conjecture: This is a continuation of my work on Arithmetic Teichmuller Spaces developed in the present series of papers. In this paper, I show that the Theory of Arithmetic Teichmuller Spaces leads, using Shinic...
- Logits of API-Protected LLMs Leak Proprietary Information: The commercialization of large language models (LLMs) has led to the common practice of high-level API-only access to proprietary models. In this work, we show that even with a conservative assumption...
- MM1: Methods, Analysis & Insights from Multimodal LLM Pre-training: In this work, we discuss building performant Multimodal Large Language Models (MLLMs). In particular, we study the importance of various architecture components and data choices. Through careful and c...
- The pitfalls of next-token prediction: Can a mere next-token predictor faithfully model human intelligence? We crystallize this intuitive concern, which is fragmented in the literature. As a starting point, we argue that the two often-conf...
- Adding NVMe SSDs to Enable and Accelerate 100B Model Fine-tuning on a Single GPU: Recent advances in large language models have brought immense value to the world, with their superior capabilities stemming from the massive number of parameters they utilize. However, even the GPUs w...
- Common 7B Language Models Already Possess Strong Math Capabilities: Mathematical capabilities were previously believed to emerge in common language models only at a very large scale or require extensive math-related pre-training. This paper shows that the LLaMA-2 7B m...
- GiT: Towards Generalist Vision Transformer through Universal Language Interface: This paper proposes a simple, yet effective framework, called GiT, simultaneously applicable for various vision tasks only with a vanilla ViT. Motivated by the universality of the Multi-layer Transfor...
- Accelerating Generative AI with PyTorch II: GPT, Fast: This post is the second part of a multi-series blog focused on how to accelerate generative AI models with pure, native PyTorch. We are excited to share a breadth of newly released PyTorch performance...
- Rephrasing the Web: A Recipe for Compute and Data-Efficient Language Modeling: Large language models are trained on massive scrapes of the web, which are often unstructured, noisy, and poorly phrased. Current scaling laws show that learning from such data requires an abundance o...
- Tweet from Aaditya Singh (@Aaditya6284): We study the effect of this choice in GPT-3.5 and GPT-4 ā specifically, we look at the effect of tokenizing left-to-right (L2R) vs right-to-left (R2L), enforced by using delimiters such as commas. We ...
- RNNs are not Transformers (Yet): The Key Bottleneck on In-context Retrieval: This paper investigates the gap in representation powers of Recurrent Neural Networks (RNNs) and Transformers in the context of solving algorithmic problems. We focus on understanding whether RNNs, kn...
- Comparative Study of Large Language Model Architectures on Frontier: Large language models (LLMs) have garnered significant attention in both the AI community and beyond. Among these, the Generative Pre-trained Transformer (GPT) has emerged as the dominant architecture...
- Open Release of Grok-1: no description found
- GitHub - xai-org/grok-1: Grok open release: Grok open release. Contribute to xai-org/grok-1 development by creating an account on GitHub.
- Monitoring AI-Modified Content at Scale: A Case Study on the Impact of ChatGPT on AI Conference Peer Reviews: We present an approach for estimating the fraction of text in a large corpus which is likely to be substantially modified or produced by a large language model (LLM). Our maximum likelihood model leve...
- Bytez: Monitoring AI-Modified Content at Scale: A Case Study on the Impact of ChatGPT on AI Conference Peer Reviews: This study examines the use of large language models (LLMs), like ChatGPT, in scientific peer review. The authors developed a method to estimate the percentage of text in peer reviews that is generate...
- GitHub - enfiskutensykkel/ssd-gpu-dma: Build userspace NVMe drivers and storage applications with CUDA support: Build userspace NVMe drivers and storage applications with CUDA support - enfiskutensykkel/ssd-gpu-dma
- GitHub - bigscience-workshop/bloom-dechonk: A repo for running model shrinking experiments: A repo for running model shrinking experiments. Contribute to bigscience-workshop/bloom-dechonk development by creating an account on GitHub.
- Model & API Providers Analysis | Artificial Analysis: Comparison and analysis of AI models and API hosting providers. Independent benchmarks across key metrics including quality, price, performance and speed (throughput & latency).
Eleuther ▷ #scaling-laws (11 messages🔥):
- Language Model Sensitivity to Dataset Complexity: A member shared that language model scaling laws seem to be influenced by dataset complexity and syntactic properties. Gzip compressibility was highlighted as a good predictor of dataset-specific scaling behavior (a measurement sketch follows this list).
- Seeking Feedback on Scaling Laws Research: Preliminary results on the impact of data complexity on scaling laws have been shared, and comprehensive experiments are in progress. The author is looking forward to using a specific package to derive more quantitative data and fitted scaling laws.
- Clarification on Data Labels: A question arose regarding additional labels in some data visualizations; they were clarified to be syntactic specifications of the Probabilistic Context-Free Grammar (PCFG) used to generate the dataset.
- Concerns About Plot Readability: There was a discussion about the difficulty of interpreting a plot because its scale was inappropriate for the smaller model, with acknowledgment and mention of alternative log-scale plots.
- Perplexity vs. Intrinsic Entropy Discourse: Members discussed how perplexity relates to the intrinsic entropy of datasets and how this impacts comparisons across datasets. It was suggested that an optimal range of lexical densities for data could result in more efficient pretraining practices.
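The gzip idea above amounts to using the compression ratio of raw text as a cheap proxy for dataset complexity; the sketch below shows only the measurement itself, with toy sample strings standing in for real corpus shards.

```python
# Cheap complexity proxy discussed above: how well does gzip compress a sample
# of the dataset? More compressible ~ more redundant / "simpler" text. Toy samples.
import gzip

def gzip_ratio(text: str) -> float:
    raw = text.encode("utf-8")
    return len(gzip.compress(raw)) / len(raw)   # lower = more compressible

simple = "the cat sat on the mat. " * 200
varied = " ".join(f"token_{i} appears exactly once in this synthetic sample." for i in range(200))
print(f"repetitive text: {gzip_ratio(simple):.2f}")
print(f"varied text:     {gzip_ratio(varied):.2f}")
```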
Eleuther ▷ #interpretability-general (13 messages🔥):
- Inquiring About Sampling Strings: One member asked if there's a canonical way to sample strings from a pre-specified set of 1-gram, 2-gram, …, n-gram statistics on a vocabulary. The discussion moved towards understanding how n-gram statistics relate to sampling.
- Clarify the Hierarchical Nature of N-grams: Another member clarified that specifying the statistics for n-grams determines the statistics for all lower orders, pointing out the hierarchical nature of n-gram language models.
- The Autoregressive Approach to Sampling: It was explained that sampling from the specified n-gram distribution can be done autoregressively, which yields the maximum-entropy distribution matching those statistics.
- Step-by-Step Sampling Technique Explained: The method involves first sampling from the unigram distribution for the first token, then sampling from the bigram conditional distribution for the second, and so on (see the sketch after this list).
- Reference to N-gram Model Implementation: For practical reference, a GitHub link to an implementation of this sampling process was provided, alongside a brief visit to the Wikipedia page on n-gram language models.
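A minimal version of the autoregressive n-gram sampling described above looks like the following; the bigram statistics are toy values, and a real implementation would back off to lower orders for unseen contexts.

```python
# Autoregressive sampling from n-gram statistics, as described above: draw the
# first token from the unigram distribution, then each next token from the
# conditional distribution given the previous context. Toy bigram example.
import random

unigram = {"the": 0.5, "a": 0.3, "dog": 0.2}
bigram = {  # P(next | prev)
    "the": {"dog": 0.7, "a": 0.3},
    "a":   {"dog": 1.0},
    "dog": {"the": 0.6, "a": 0.4},
}

def sample(dist):  # dist: {token: probability}
    return random.choices(list(dist), weights=list(dist.values()), k=1)[0]

def generate(length: int) -> list[str]:
    tokens = [sample(unigram)]                     # first token ~ unigram stats
    while len(tokens) < length:
        tokens.append(sample(bigram[tokens[-1]]))  # next token ~ P(. | previous token)
    return tokens

random.seed(0)
print(" ".join(generate(8)))
```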
Links mentioned:
- Word n-gram language model - Wikipedia: no description found
- features-across-time/scripts/generate_bigrams.py at main Ā· EleutherAI/features-across-time: Understanding how features learned by neural networks evolve throughout training - EleutherAI/features-across-time
Eleuther ā· #lm-thunderdome (31 messagesš„):
-
Newcomer's Inquiry into lm-eval-harness Integration: A member new to lm-eval-harness asked about integrating it with their LLM model, specifically looking for demo code for llama on gaudi2 using megatron deepspeed. They also inquired about function inheritance and the fixed format of the dictionaries passed as arguments, noting a lack of direct command-line overrides.
-
Misleading Model Behavior in lm-eval-harness: A user reported an issue where, when specifying a model to use in lm-eval, it defaulted to gpt-2-small even when different models were indicated. Upon investigation, it was discovered that the presence of dual `model_args` in the command led to the harness defaulting to gpt-2-small.
-
Inconsistencies in Reported Scores on the Open LLM Leaderboard: A member questioned the accuracy of the llama2-70b MMLU score reported on the Open LLM Leaderboard, which differed from their own experimental results. Clarification was offered that the discrepancy could be due to differing methods of averaging over MMLU subtasks.
-
Request for Appleās New Multimodal Model Integration: A member suggested integrating Appleās new MM1 multimodal model into the lm-evaluation-harness to appear on the Open LLM Leaderboard.
-
Evaluation Deadlocks and Workarounds in lm-evaluation-harness: Users discussed an issue on GitHub (#1485) concerning a deadlock in the lm-evaluation-harness. A workaround involving avoiding concurrent dataset access was mentioned, along with a possible connection to multiprocessing in the evaluation harnessā code.
-
Location of Downloaded Models in lm-harness: Users asked and shared information about the location of model downloads by lm-eval, which is typically in the Huggingface cache directory and controlled by environment variables such as HF_HOME, TRANSFORMERS_CACHE, and HF_DATASETS_CACHE.
-
Release of lm-eval v0.4.2 and Help with PR Reviews: It was announced that version 0.4.2 of lm-eval is now available on PyPI, with a call for more contributions and assistance in PR reviews. Users are encouraged to ping for pending PR reviews that need attention.
-
Discussion on Representing Translated Tasks in lm-eval-harness: A conversation took place regarding the incorporation of Multilingual evals translated via machine translation into lm-eval-harness. It was proposed that a possible method would be collecting such tasks under a directory and clearly tagging them as translations.
-
Explanation of likelihood vs. likelihood_rolling in lm_harness: A distinction was made between `likelihood` and `likelihood_rolling` within lm_harness: `loglikelihood_rolling` is for unconditional loglikelihood computations, while `loglikelihood` is for the conditional loglikelihood of a string given another string, and can be used for multiple-choice evaluations (a conceptual sketch appears after this list).
-
Stride Method and Context Length Considerations in Perplexity Evaluation: It was noted that the lm-evaluation-harness calculates perplexity using non-overlapping sliding windows, and an enhancement to expose the option for a sliding window with a custom stride is planned. It was also mentioned that larger context lengths in models reduce the effect of the stride choice.
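As referenced in the `likelihood` vs. `likelihood_rolling` item above, here is a conceptual sketch of the two quantities computed directly with a Hugging Face causal LM. This is not the lm-eval-harness implementation; the model choice and helper names are illustrative.

```python
# Conceptual sketch: conditional vs. unconditional (rolling) log-likelihood with a HF causal LM.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

def loglikelihood(context: str, continuation: str) -> float:
    """Conditional log P(continuation | context), as used for multiple-choice scoring."""
    ctx = tok(context, return_tensors="pt").input_ids
    cont = tok(continuation, return_tensors="pt").input_ids
    ids = torch.cat([ctx, cont], dim=1)
    with torch.no_grad():
        logprobs = model(ids).logits.log_softmax(-1)
    # logits at position t predict token t+1, so score only the continuation positions
    scores = logprobs[0, ctx.shape[1] - 1 : -1]
    return scores.gather(1, cont[0].unsqueeze(1)).sum().item()

def loglikelihood_rolling(text: str) -> float:
    """Unconditional log P(text), e.g. for perplexity-style evaluations."""
    ids = tok(text, return_tensors="pt").input_ids
    with torch.no_grad():
        logprobs = model(ids).logits.log_softmax(-1)
    # the first token is left unscored in this sketch; a full implementation also conditions it
    # on a starting BOS/EOT token so every token contributes
    return logprobs[0, :-1].gather(1, ids[0, 1:].unsqueeze(1)).sum().item()

print(loglikelihood("The capital of France is", " Paris"))
print(loglikelihood_rolling("The capital of France is Paris."))
```

For multiple choice, the conditional variant is computed once per answer option and the highest-scoring option is selected.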
Links mentioned:
- Perplexity of fixed-length models: no description found
- GitHub: Letās build from here: GitHub is where over 100 million developers shape the future of software, together. Contribute to the open source community, manage your Git repositories, review code like a pro, track bugs and fea...
- lm-evaluation-harness/docs/model_guide.md at main Ā· EleutherAI/lm-evaluation-harness: A framework for few-shot evaluation of language models. - EleutherAI/lm-evaluation-harness
- `wmt14-en-fr` deadlock issue Ā· Issue #1485 Ā· EleutherAI/lm-evaluation-harness: While running evaluation on this task, during ter metric computation, the program gets stuck forever. The command: lm_eval --model hf --model_args pretrained=microsoft/phi-2,trust_remote_code=True ...
- Release v0.4.2 Ā· EleutherAI/lm-evaluation-harness: lm-eval v0.4.2 Release Notes We are releasing a new minor version of lm-eval for PyPI users! We've been very happy to see continued usage of the lm-evaluation-harness, including as a standard test...
- evaluate/metrics/perplexity/perplexity.py at 8dfe05784099fb9af55b8e77793205a3b7c86465 Ā· huggingface/evaluate: š¤ Evaluate: A library for easily evaluating machine learning models and datasets. - huggingface/evaluate
Eleuther ā· #gpt-neox-dev (3 messages):
-
Clarity on The Pileās Shuffling Status: A member inquired about the shuffling status of The Pile dataset. Another member clarified that the original files distributed were not shuffled, but the preprocessed and pretokenized data on HF is ready-to-use and was utilized by Pythia.
-
Shuffling Detail for Pile Components: It was added that individual components of The Pile are definitely not shuffled, with some organized chronologically. However, the original train/test/validation split is expected to be shuffled to ensure a good mix throughout the dataset.
OpenAI ā· #ai-discussions (193 messagesš„š„):
-
Understanding AIās āUnderstandingā of Language: Members discussed whether AI truly comprehends language or merely predicts the next word without understanding. The conversation covered emergent properties of AI, comparisons to human cognition, and speculation about AIās potential to develop sensory fidelity comparable to humans.
-
AI and the Concept of Consciousness: The debate continued with thoughts on consciousness, with some suggesting that much of human cognition is about reading and inducing emotions, a task that requires less effort when interacting with AI, as it doesnāt compete for social status or wealth.
-
DALL-E 3 Praised for Precision: A user praised DALL-E 3 for its precise response to detailed prompts and its ability to respect negative prompts, stating that OpenAI has done stellar work with the model.
-
Claude vs GPT-4 Performance: Users compared the newer Claude-3 with GPT-4, noting that Claude may be the better conversationalist, while GPT-4 still performs well in certain tasks like programming. There was a call for OpenAI to release newer models or options to revert to earlier GPT-4 versions.
-
Accessibility to Sora and AI Support Issues: A member inquired about access to Sora, and solbus provided a link to the current accessibility status. There were also grievances expressed about difficulties contacting OpenAI support, with advice offered on navigating the help site and raising tickets.
Link mentioned: Enterprise privacy: no description found
OpenAI ā· #gpt-4-discussions (34 messagesš„):
- Enquiring Minds Want to Know About GPT-5: Multiple members have asked about the release date of GPT-5, showing curiosity about the future of OpenAIās language models.
- Integration of Web Searching in GPT API: A member questions how to add web searching capabilities into the GPT API, similar to those in ChatGPT-4.
- Seeking Bot-Building Assistance on Mobile: A user expressed the desire to create an OpenAI chatbot on their phone and customize it, particularly for Discord, and sought help using BotGhost and OpenAI.
- Locating Elements in Playwright: A member discussed issues with ChatGPT Turbo 3.5, which was not adhering to their method preference for locating elements in Playwright tests.
- Technical Troubles and Reporting: Users have noted technical problems with GPT responses, including inability to prompt responses and perpetual loading. A member inquired about where to report a bug and reach the Chat GPT authors, with responses directing them toward specific channels and help.openai.com.
- Concerns on GPT Filter Sensitivity: Users urge for a revision of GPTās content filters, suggesting they are too restrictive and hinder creative writing by flagging content commonly seen in childrenās cartoons.
OpenAI ā· #prompt-engineering (79 messagesš„š„):
-
Prompt Engineering Challenges for Classification: A member is searching for a methodological way to test various prompt architectures for a classification use case. They are experimenting with the amount of contextual information within the prompt to improve recall and minimize false positives, considering the limits of the total context window suggested by another user.
-
Playwright Locator Issues with GPT-3.5 Turbo: There are discussions concerning GPT-3.5 Turboās failure to produce suitable Playwright test code, specifically it not using the correct locator syntax. Alternatives such as GPT-4 were suggested for improved output.
-
Frustrations with Refusals to Perform Tasks: Several members expressed frustration with the model refusing tasks, seeking advice on how to encounter fewer denials. Members shared techniques like meta-prompting to navigate around the AIās self-moderation that leads to task refusal, and requested examples where issues persist to offer concrete help.
-
Struggling with Changes in ChatGPTās Response Patterns: Users notice a change in how ChatGPTās algorithm handles tasks, with increasingly aggressive bias minimization leading to more frequent āI canāt do thatā responses. There were discussions about how the model may develop a refusal stance that persists through a conversation, impacting workflows.
-
Guidance on Utilizing GPT Web Search with Multiple Queries: A member sought advice on how to prompt GPT to use web search with multiple queries to access a wide array of sources and gather more comprehensive information. The conversation highlighted the difference between guiding the model to search for multiple sources versus the user directly controlling the specific queries being made.
OpenAI ā· #api-discussions (79 messagesš„š„):
-
Prompt Optimization Challenges: A member is experimenting with OpenAI for a classification use case, trying different prompt architectures to minimize false positives while maintaining recall. They seek a methodological way to test different architectures and are considering using a custom GPT.
-
Model Selection for Playwright Tests: A user struggles to get GPT-3.5 turbo to output executable code for Playwright tests and is inquiring if GPT-4 might be more adept with the latest libraries. Theyāve discussed attempting to correct code format across multiple API calls without success.
-
Dealing with AI Task Refusals: Members exchange frustrations and advice on how to handle the AIās refusal to complete certain tasks. Examples and strategies vary, from making edits and using meta-prompting to accepting the AIās limitations and starting new conversations.
-
Shifting AI Behavior Over Time: Thereās a discussion about noticeable changes in ChatGPTās behavior, particularly an increased tendency to refuse tasks that it previously would complete. Users are adapting with clarification prompts and the āFive Whyāsā technique to navigate the modelās evolving content policy.
-
Harnessing GPT for Multiple Web Searches: Members debate on how to best prompt GPT to use web search for gathering information using multiple queries for a more comprehensive search result. Despite some confusion, an example approach is shared to guide GPT for multiple sources by crafting detailed and directive prompts.
HuggingFace ā· #general (96 messagesš„š„):
-
Cross-Encoder Multi-GPU Tuning Inquiry: A member asked for guidance on fine-tuning a cross encoder model using multiple GPUs, wondering about specific parameter modifications required to accommodate the hardware change.
-
Aya Demo Enhanced: The Aya demo has been improved with a very high repetition penalty thanks to a community contribution. Thereās a call for contributors to implement a slider in the gradio interface.
-
NVIDIAās Powerful New GPUs: Discussions revolve around NVIDIAās new H100 GPU, its power consumption (up to 850W), and long lead times for purchase. Memorable points include the combined server CPU and GPU on the same board, direct chip liquid cooling, and the allocation of rack power for modern supercomputers.
-
Data Hosting Leaderboard on HuggingFace: A member shared a leaderboard they created, indicating the amount of data hosted on HuggingFace repositories, showcasing the platformās significant model and data hosting capacity.
-
Grok-1 Model Emerges from xAI: Grok-1, a 314 billion parameter Mixture-of-Experts model, is discussed after its release, with marvel over its size and the rapidity with which it was uploaded to HuggingFace by a user following its release under Apache 2.0 license.
Links mentioned:
- grok-1: Grok-1 is a 314B parameter Mixture of Experts model - Base model (not finetuned) - 8 experts (2 active) - 86B active parameters - Apache 2.0 license - Code: - Happy coding! p.s. we re hiring:
- Open Release of Grok-1: no description found
- Whisper Large V3 - a Hugging Face Space by ivrit-ai: no description found
- Tweet from Weyaxi (@Weyaxi): š¤Have you ever wondered how much data we host on @huggingface? Well, I did after seeing @TheBlokeAI's model count and 120B models just chilling on the platform š š So I scraped all repositor...
- Tweet from Linux Performance, Benchmarks & Open-Source News - Phoronix: no description found
- Tonic/Aya Ā· Set a repetition_penalty constant as 1.8: no description found
- GitHub - moritztng/fltr: Like grep but for natural language questions. Based on Mistral 7B or Mixtral 8x7B.: Like grep but for natural language questions. Based on Mistral 7B or Mixtral 8x7B. - moritztng/fltr
- Video-LLaVA demo api not working with Gradio-Client Ā· Issue #7722 Ā· gradio-app/gradio: Describe the bug Im trying to use the python api for the Video-LLaVA model demo on hugging face spaces but I get an error: Traceback (most recent call last): File "/Users/kamakshiramamurthy/Deskt...
HuggingFace ā· #today-im-learning (12 messagesš„):
- Exploring Optimization Techniques: A member shared their findings on GridSearch, RandomSearch, and Bayesian Optimization, admitting confusion specifically about the application of Bayesian Optimization.
- Hugging Face 101 Required: A member is seeking assistance on how to use Hugging Face and requested a basic explanation of what it is.
- AI Duets Hit a Sour Note: A newcomer in AI music generation struggles with creating harmonious duets and seeks advice on how others make covers with multiple artists, such as duets involving Harry Styles and Taylor Swift, without the AI-generated voices sounding āstrangled.ā
- MLOps with Hugging Face & SageMaker Pipelines: User shares a link to a workshop notebook (Workshop 3: MLOps: End-to-End Hugging Face Transformers with the Hub & SageMaker Pipelines) explaining how to create an end-to-end MLOps pipeline with Hugging Face Inference DLCs and Amazon SageMaker.
- Troubleshooting Model Access: A member needs assistance accessing a Hugging Face model (`Mistral-7B-Instruct-v0.2.GGUF`) after encountering a `404` error, indicating that the specified repository is not found or accessible on the Hugging Face model hub.
Links mentioned:
- MLOps: End-to-End Hugging Face Transformers with the Hub & SageMaker Pipelines: Learn how to build an End-to-End MLOps Pipeline for Hugging Face Transformers from training to production using Amazon SageMaker.
HuggingFace ā· #reading-group (12 messagesš„):
- Linguistic and ML Fusion Fascinates: Members discussed the impressive nature of combining linguistics and machine learning, particularly in models dealing with both English and Chinese, considered highly distinct languages.
- Delving into Medusaās Potential: The paper on Medusa was shared for its novel approach to LLM inference by using multiple decoding heads to predict several tokens in parallel, which could theoretically create more efficient models.
- Design Lessons in Multimodal LLMs: An abstract for a paper was posted, highlighting design insights for Multimodal Large Language Models, with key findings on architecture components and data choices essential for state-of-the-art results.
- The Intricacies of Dominant Languages in LLMs: A member speculated that heavy skewing toward English in corpora could inadvertently lead to models replicating English or European patterns in language and thought.
- The Penetration of LLMs in Peer Reviews: A study from arXiv was cited showing that between 6.5% and 16.9% of text in AI conference peer reviews might have been significantly modified by LLMs, revealing certain behavioral trends among reviewers.
Links mentioned:
- Medusa: Simple LLM Inference Acceleration Framework with Multiple Decoding Heads: The inference process in Large Language Models (LLMs) is often limited due to the absence of parallelism in the auto-regressive decoding process, resulting in most operations being restricted by the m...
- Paper page - MM1: Methods, Analysis & Insights from Multimodal LLM Pre-training: no description found
- Monitoring AI-Modified Content at Scale: A Case Study on the Impact of ChatGPT on AI Conference Peer Reviews: We present an approach for estimating the fraction of text in a large corpus which is likely to be substantially modified or produced by a large language model (LLM). Our maximum likelihood model leve...
- Bytez: Monitoring AI-Modified Content at Scale: A Case Study on the Impact of ChatGPT on AI Conference Peer Reviews: This study examines the use of large language models (LLMs), like ChatGPT, in scientific peer review. The authors developed a method to estimate the percentage of text in peer reviews that is generate...
HuggingFace ā· #NLP (18 messagesš„):
-
Seeking NL2SQL Pipeline Enhancement: A member is working on an NL2SQL pipeline using BAAI/llm-embedder for embeddings, TheBloke/nsql-llama-2-7B-GGUF for the NL2SQL model, and FAISS for the vector store, asking for recommendations on improving the accuracy of the embedding model and the NL2SQL model for their application (a minimal retrieval sketch appears after this list).
-
Grace Hopper Superchip Discussion Initiated: An announcement about the NVIDIA Grace Hopper Superchip was made, highlighting its potential for high-performance computing (HPC), AI, and data center applications, but it was not followed up with further details or related discussion.
-
Guidance Requested on NLP Learning Resources:
- For those curious about how to start with NLP, HuggingFaceās NLP course and Stanfordās NLP textbook were recommended by members as useful learning resources.
- Additionally, the Stanford course CS224N was mentioned as a concise version of the recommended textbook.
-
Looking for a Conformer ASR Training Tutorial: A member asked for a tutorial on training a conformer model for automatic speech recognition (ASR), but no recommendations or links were provided in the conversation thread.
-
Inquiry for Deployable Free LLM API:
- A member inquired about any free LLM APIs suitable for deploying on production, and was directed to try ollama, but there was no follow-up confirming the suitability or providing additional options.
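Related to the NL2SQL pipeline item above, here is a minimal retrieval sketch. It assumes the BAAI/llm-embedder model loads via sentence-transformers and uses a flat FAISS index; the schema snippets, query, and top-k value are illustrative.

```python
# Minimal retrieval sketch: embed schema snippets and search them with FAISS.
import faiss
from sentence_transformers import SentenceTransformer

embedder = SentenceTransformer("BAAI/llm-embedder")

schema_docs = [
    "Table orders(order_id, customer_id, total, created_at)",
    "Table customers(customer_id, name, country)",
]
vecs = embedder.encode(schema_docs, normalize_embeddings=True)

index = faiss.IndexFlatIP(vecs.shape[1])  # inner product equals cosine on normalized vectors
index.add(vecs)

query = "total revenue per country last month"
q = embedder.encode([query], normalize_embeddings=True)
scores, ids = index.search(q, 2)
print([schema_docs[i] for i in ids[0]])   # schema chunks to pass to the NL2SQL model
```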
Link mentioned: Introduction - Hugging Face NLP Course: no description found
LlamaIndex ā· #blog (7 messages):
- Reimagining Retrieval Augmentation: A tweet introduced a novel approach to handle complex queries in a RAG pipeline, suggesting that documents should be treated as interactive tools rather than static text for more dynamic interactions. The concept aims to provide more intricate responses during retrievals and is discussed further here.
- Instrumentation Tools Level Up: LlamaIndex released version 0.10.20, featuring a new Instrumentation module designed to enhance observability within RAG pipelines. Users can check out examples of basic observability and API call observations in linked notebooks detailed in the tweet available here.
- Advanced QA with Search-in-the-Chain: The team highlighted a paper by Shicheng Xu et al. proposing āSearch-in-the-Chainā as an evolution to RAG for question-answering, where retrieval is interleaved with planning and can prompt replanning if necessary. More details on this innovative method can be found here.
- Navigate Job Market with RAG and LlamaParse: Kyosuke Moritaās blog post demonstrates how combining LlamaParse with LlamaIndex can create a RAG-based Job Assistant to match candidates with jobs using data extracted from CVs. The blog provides insights into text extraction and job matching solutions here.
- Announcing Observability Integration with @langfuse: LlamaIndex announced their partnership with @langfuse to provide a two-line solution for adding observability to RAG pipelines, combining features like tracing, prompt management, and evaluation. The integration is aimed at enriching RAG applications with more robust monitoring capabilities, and details can be accessed here.
Links mentioned:
- llama_index/docs/examples/instrumentation/basic_usage.ipynb at main Ā· run-llama/llama_index: LlamaIndex is a data framework for your LLM applications - run-llama/llama_index
- llama_index/docs/examples/instrumentation/observe_api_calls.ipynb at main Ā· run-llama/llama_index: LlamaIndex is a data framework for your LLM applications - run-llama/llama_index
LlamaIndex ā· #general (303 messagesš„š„):
-
Exploring Multi-Agent Chaining: A user discussed how to chain multiple OpenAI agents using LlamaIndex, referencing LlamaIndex documentation on deploying agents with tools. There was a mention of using `FunctionTool` and `QueryEngineTool` in this context, although they encountered an error requiring further clarification (a minimal sketch appears after this list).
-
Xinference Support Inquiry: Two participants conversed about Xinference CPU support in LlamaIndex, sharing a local deployment guide and GitHub links. A member sought advice on installing in a cluster to enhance inference time, referencing Xinference integration with LlamaIndex.
-
Efficient Use of BM25 with LlamaIndex: A user sought assistance on how to employ a BM25-based model for embeddings within LlamaIndex, linking to relevant documentation examples. The conversation evolved into discussing appropriate settings for bm25 as a `HuggingFaceEmbedding` model.
-
Seeking Access to LlamaCloud: An individual inquired about accessing LlamaCloud, receiving guidance to sign up at https://cloud.llamaindex.ai, where they can gain access with a simple registration process.
-
Discussion on Embedding Models and Compatibility: A member brought up ChatGPTās performances concerns with regards to embedding model changes over time and pondered whether using LlamaIndex to manually set embedding models could prevent such issues. They advised their client to re-upload files to possibly improve ChatGPTās performance consistency.
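For the multi-agent/tools item above, here is a minimal sketch of wiring `FunctionTool` and `QueryEngineTool` into an OpenAI agent. It assumes llama-index 0.10-era import paths, an OpenAI API key in the environment, and a local `./docs` folder; the tool names and question are illustrative.

```python
# Minimal sketch of combining a FunctionTool and a QueryEngineTool behind one agent.
from llama_index.core import SimpleDirectoryReader, VectorStoreIndex
from llama_index.core.tools import FunctionTool, QueryEngineTool
from llama_index.agent.openai import OpenAIAgent

def multiply(a: float, b: float) -> float:
    """Multiply two numbers."""
    return a * b

calc_tool = FunctionTool.from_defaults(fn=multiply)

index = VectorStoreIndex.from_documents(SimpleDirectoryReader("./docs").load_data())
docs_tool = QueryEngineTool.from_defaults(
    query_engine=index.as_query_engine(),
    name="docs",
    description="Answers questions about the local document collection.",
)

agent = OpenAIAgent.from_tools([calc_tool, docs_tool], verbose=True)
print(agent.chat("What is 6 times 7, and what do the docs say about agents?"))
```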
Links mentioned:
- Prompt Engineering Guide: A Comprehensive Overview of Prompt Engineering
- Defining and Customizing Documents - LlamaIndex š¦ v0.10.20.post1: no description found
- Prompt Engineering Guide: A Comprehensive Overview of Prompt Engineering
- CodeSplitter - LlamaIndex š¦ v0.10.20.post1: no description found
- Multitenancy with LlamaIndex - Qdrant: Qdrant is an Open-Source Vector Database and Vector Search Engine written in Rust. It provides fast and scalable vector similarity search service with convenient API.
- Structured Data Extraction - LlamaIndex š¦ v0.10.20.post1: no description found
- LlamaCloud: no description found
- Tools - LlamaIndex š¦ v0.10.20.post1: no description found
- ">no title found: no description found
- llama_index/docs/examples/vector_stores/Qdrant_using_qdrant_filters.ipynb at 5c53f41712785e5558156372bdc4f33a6326fa5f Ā· run-llama/llama_index: LlamaIndex is a data framework for your LLM applications - run-llama/llama_index
- hof/flow/chat/prompts/dm.cue at _dev Ā· hofstadter-io/hof: Framework that joins data models, schemas, code generation, and a task engine. Language and technology agnostic. - hofstadter-io/hof
- Image to Image Retrieval using CLIP embedding and image correlation reasoning using GPT4V - LlamaIndex š¦ v0.10.20.post1: no description found
- [Question]: custom llm but is blocked Ā· Issue #12034 Ā· run-llama/llama_index: Question Validation I have searched both the documentation and discord for an answer. Question the code is from typing import Optional, List, Mapping, Any from llama_index.core import SimpleDirecto...
- llama_index/llama-index-integrations/llms/llama-index-llms-ollama/llama_index/llms/ollama/base.py at main Ā· run-llama/llama_index: LlamaIndex is a data framework for your LLM applications - run-llama/llama_index
LlamaIndex ā· #ai-discussion (4 messages):
- RAG Tutorial with LlamaParse: A new YouTube video was shared, titled āRAG with LlamaParse, Qdrant and Groq | Step By Step,ā offering a walkthrough on creating an effective RAG using LlamaParse, Qdrant, and Groq.
- Seeking RAG Preparation Tips: One member inquired about the top 5 tips for preparing a document for RAG and how to automatically add metadata to Pinecone for optimal retrieval, but specifics were not discussed.
- AI with RAG Pipeline Insight: A Medium blog post titled "Empowering Voices: AI Assistant with RAG Pipeline, Memory, and LlamaIndex" was highlighted, providing insight into creating an AI assistant with these technologies.
- RAPTOR Pack Integration with HuggingFace Models: A user is attempting to modify a RAPTOR pack implementation to use HuggingFace models instead of OpenAI's but encounters errors, and has shared their modified code snippet from the `llama_index` GitHub repo's example. They are asking for help with the correct coding implementation.
Link mentioned: RAG with LlamaParse, Qdrant and Groq | Step By Step: In this video, I will show you how to create a effective RAG with LlamaParse, Qdrant and Groq. I will explain what LlamaParse is and briefly walk you throughā¦
Latent Space ā· #ai-general-chat (202 messagesš„š„):
- Deciphering Yannās Perspective: Discussions reveal that Yann LeCun may favor visual or planning models over language because he lacks an inner monologue, as discussed in various tweets including a tweet from @kzSlider with a link to an interview with someone who lacks an internal monologue.
- Views on Linguistic Reasoning: Members like mr.osophy argue that we overestimate our use of language for reasoning and that visual representations can be more directly mapped to actions, like learning a tennis serve by watching someone rather than translating the visual into language first.
- Scenarios Leading AI Offerings: Scene-scenarioās new UPSCALER tool might prompt Magnific to reconsider pricing, offering image quality enhancement to 10k resolutions, as announced by @emmanuel_2m in a tweet.
- GPT Models, Computing, and Frameworks Discussions: Points of interest include a comment on an extraordinary increase in compute leading to faster model creation by @granawkins, the offering of free GTC virtual session codes by @harpreetsahota, and an in-depth discussion about the scaling and tokenization of multilingual models driven by @picocreator.
- Grok-1 Enters the Chat: The open release of xAIās Grok-1, a 314 billion parameter Mixture-of-Experts model under the Apache 2.0 license, is highly anticipated but its effectiveness as compared to other models is questioned in a flurry of tweets and counter opinions. Details and reactions can be followed through Grok-1ās release page and the conversation thread starting with @gen_being_96ās mention of the model.
Links mentioned:
- Open Release of Grok-1: no description found
- Tweet from Burny ā Effective Omni (@burny_tech): New details about GPT-5 from Sam Altman Heās basically admitting that GPT-5 will be a massive upgrade from GPT-4, so we can expect a similar jump from 3 to 4. ""If you overlook the pace of imp...
- Tweet from KZ (@kzSlider): This makes so much sense. Yannās always been looking for models that reason visually or using planning rather than purely in language āļø Quoting Teknium (e/Ī») (@Teknium1) This explains why Yann is ...
- Tweet from Champagne Joshi (@JoshWalkos): This is a fascinating conversation with a girl who lacks an internal monologue. She articulates the experience quite well.
- Tweet from Open Interpreter (@OpenInterpreter): 100 years in the making. 100 hours to go.
- Tweet from xlr8harder (@xlr8harder): I think I speak for everyone here when I say: 314 billion parameters what the hell
- Tweet from Teortaxesā¶ļø (@teortaxesTex): @aidan_mclau 0) Rocket man bad 1) it's not much worse 2) As you can see it's a sparse-upcycled Grok-0. It's undercooked. In 2023, continual pretraining has been āsolved, and having validat...
- š¦ EagleX 1.7T : Soaring past LLaMA 7B 2T in both English and Multi-lang evals (RWKV-v5): A linear transformer has just cross the gold standard in transformer models, LLaMA 7B, with less tokens trained in both English and multi-lingual evals. A historical first.
- Tweet from swyx (@swyx): how is it possible to have a 2hr conversation with sama and get zero alpha but hey we talked about aliens again thats fun
- Tweet from Yao Fu (@Francis_YAO_): Frontier models all have at least 100k context length, Gemini 1.5 has even 1m context. What about research and open source? Introducing Long Context Data Engineering, a data driven method achieving ...
- Tweet from Emm (@emmanuel_2m): šØ Today, we're excited to launch the Scenario #UPSCALER! Elevate your AI creations up to 10k resolution. š Built for unmatched #CreativeControl & guided workflows. š° It starts at just $15/mo ...
- Tweet from Alex Volkov (Thursd/AI) (@altryne): Sora team showing up at Berkley to talk about SORA
- Tweet from Grantāļø (@granawkins): "Between Q1-24 and Q4-25, there will be a 14x increase in compute. Then, if you factor in algorithmic efficiency doubling every 9 months, the effective compute at the end of next year will be alm...
- Tweet from Teknium (e/Ī») (@Teknium1): This explains why Yann is so bearish on LLMs... š²
- Tweet from Yao Fu (@Francis_YAO_): Grok's MMLU is only on par with Mixtral, despite one order of magnitude larger. I believe it has great potential but not fully released, and good continue pretrain data may substantially lift the ...
- Tweet from AI Is Like Water: Generative AI is like water. The phrase was borne out of frustration, but it opens up a new world of AI playbooks.
- Sam Altman: OpenAI, GPT-5, Sora, Board Saga, Elon Musk, Ilya, Power & AGI | Lex Fridman Podcast #419: Sam Altman is the CEO of OpenAI, the company behind GPT-4, ChatGPT, Sora, and many other state-of-the-art AI technologies. Please support this podcast by che...
- Data Engineering for Scaling Language Models to 128K Context: We study the continual pretraining recipe for scaling language models' context lengths to 128K, with a focus on data engineering. We hypothesize that long context modeling, in particular \textit{t...
- WATCH: Jensen Huang's Nvidia GTC Keynote - LIVE: Tune in at 1:00pm PT / 4:00pm ET when Nvidia CEO Jensen Huang kicks off its biannual GTC conference.Never miss a deal again! See CNETās browser extension š ...
- Beyond Transformers - Intro to RWKV Architecture & The World To... Eugene Cheah & Harrison Vanderbyl: Beyond Transformers - Intro to RWKV Architecture & The World Tokenizer - Eugene Cheah & Harrison Vanderbyl, Recursal AIWhats comes next after transformers?In...
- #51 FRANCOIS CHOLLET - Intelligence and Generalisation: In today's show we are joined by Francois Chollet, I have been inspired by Francois ever since I read his Deep Learning with Python book and started using th...
- GitHub - FranxYao/Long-Context-Data-Engineering: Implementation of paper Data Engineering for Scaling Language Models to 128K Context: Implementation of paper Data Engineering for Scaling Language Models to 128K Context - FranxYao/Long-Context-Data-Engineering
- Tweet from GitHub - FixTweet/FxTwitter: Fix broken Twitter/X embeds! Use multiple images, videos, polls, translations and more on Discord, Telegram and others: Fix broken Twitter/X embeds! Use multiple images, videos, polls, translations and more on Discord, Telegram and others - FixTweet/FxTwitter
- Tweet from Daniel Han (@danielhanchen): Had a look through @Grok's code: 1. Attention is scaled by 30/tanh(x/30) ?! 2. Approx GELU is used like Gemma 3. 4x Layernoms unlike 2x for Llama 4. RMS Layernorm downcasts at the end unlike Llama...
- Explaining the SDXL latent space: no description found
- Tweet from jā§nus (@repligate): this was the result of navigating to the ../../microsoft/bing/bing_chat directory in claude's backrooms, then letting claude use commands to look around on its own, then running: <cmd_soul>...
- [AINews] MM1: Apple's first Large Multimodal Model: AI News for 3/14/2024-3/15/2024. We checked 358 Twitters and 20 Discords (332 channels, and 2839 messages) for you. Estimated reading time saved (at 200wpm):...
- Bark - a suno Collection: no description found
- GTC 2024: #1 AI Conference: Register now. Streamed online. March 18-21, 2024.
- NVIDIA & Harpreet Sahota GTC 2024: no description found
- Do Llamas Work in English? On the Latent Language of Multilingual Transformers: We ask whether multilingual language models trained on unbalanced, English-dominated corpora use English as an internal pivot language -- a question of key importance for understanding how language mo...
- Bytez: Do Llamas Work in English? On the Latent Language of Multilingual Transformers: In this research study, scientists wanted to know if language models (that can generate text) use English as a "pivot" language internally, even when prompted in other languages. They found ...
- Multilingual - a stereoplegic Collection: no description found
Latent Space ā· #ai-announcements (2 messages):
-
Paper Club Session Alert: The community is currently diving into āA Comprehensive Summary Of Large Language Modelsā paper in the paper club session. All interested are welcome to join and participate in the discussion.
-
Community Shoutout: A memberās work received attention on Hacker News, as highlighted by a fellow community participant.
Latent Space ā· #llm-paper-club-west (20 messagesš„):
- Attention Mechanism Fundamentals Clarified: One member questioned why attention mechanisms were developed, leading to another explaining that attention was created to address the limitation of fixed-length encoding vectors in traditional models, allowing the decoder to access all relevant parts of the input sequence.
- Understanding the Motivation Behind Using Attention: A member conveyed the lack of understanding intuition behind design choices in transformer-based models. Another participant responded, clarifying that attention allows decoders to focus on the most relevant parts of the input sequence.
- Parallelization Perks of the Transformer Architecture: In the context of parallelization, a member elucidated that the attention mechanism in transformers allows processing different tokens independently and concurrently, leading to more efficient computation and faster training.
- Acknowledging Efficient Computation in Transformers: The importance of scaled dot product operations was highlighted, which eliminates the need for āwaitingā seen in sequential calculations, thus enhancing the efficiency of models like GPT.
- Gratitude for Educational LLM Session: Participants thanked the session hosts, noting that their insights provided a deeper understanding of the lineage of large language models (LLMs) and helping to demystify the conceptual underpinnings of the technology.
Latent Space ā· #ai-in-action-club (36 messagesš„):
- Passive Participation: A member mentioned they are only passively participating due to being in an in-person meeting.
- Experience with Loading Screens: An individual reported having experiences akin to the RAG (Retrieval-Augmented Generation) when encountering loading screens.
- Retrieval-Augmented Generation Discussed: An article on advanced RAG was shared, explaining concepts from small to big retrieval.
- Exchanging Research Ideas: Participants discussed alternative methods to cosine similarity in vector space models, referencing contrastive embeddings and suggesting the use of LLM for vector comparisons.
- Shared Resource Spreadsheet for AI Topics: A Google spreadsheet containing topics, dates, facilitators, and resources for AI-related discussions was shared, highlighting the structured approach to group discussions.
Link mentioned: AI In Action: Weekly Jam Sessions: 2024 Topic,Date,Facilitator,Resources,@dropdown UI/UX patterns for GenAI,1/26/2024,nuvic,<a href=āhttps://maggieappleton.com/squish-structure">https://maggieappleton.com/squish-structā¦
LAION ā· #general (168 messagesš„š„):
-
Jupyter Notebooks Accessibility in Copilot: Microsoft Copilot Pro subscriptions provide free access to Jupyter Notebooks with libraries like simpy and matplotlib, much like the offering in ChatGPT Plus.
-
DALL-E 3 Dataset Update: The DALL-E 3 dataset was not removed from Hugging Face; it was relocated to a new location, which can be found at Hugging Face's dalle-3-dataset repository. Users can use the `load_dataset()` function with a specific revision for reproducibility, as documented by Hugging Face (a short example appears after this list).
-
Xformers Discussions for PyTorch Compatibility: Members discussed solutions for integrating xformers with PyTorch, including the use of virtual environments, a specific installation order, and the PyTorch index URL which guarantees a compatible version of xformers.
-
Leveraging Metadata in Captioning Prompts: One member shared insights about using metadata hints in captioning prompts to improve AI-generated text accuracy. An example script utilizing cog captioning was mentioned, which could be accessed at EveryDream2trainerās caption_cog GitHub script.
-
Questions About Vast.AI and Security: Users discussed the operation and security aspects of running containers on Vast.AI, noting that it does not adhere to strict security standards like HIPAA. Vast.AI operates by providing SSH access to a container running on a host machine, but ultimately, the host has root access, so sensitive tasks should use major cloud providers instead.
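As mentioned in the DALL-E 3 dataset item above, here is a short example of pinning a dataset revision with `load_dataset()`; the `revision` value shown is a placeholder for a real commit SHA from the repository.

```python
# Short example of pinning a dataset revision for reproducibility.
from datasets import load_dataset

ds = load_dataset(
    "OpenDatasets/dalle-3-dataset",
    revision="main",  # replace "main" with a specific commit SHA to pin the exact data version
)
print(ds)
```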
Links mentioned:
- Why Chinese companies are flocking to Mexico: The country offers a back door to the United States
- Silicon Valley Yes GIF - Silicon Valley Yes Cheer - Discover & Share GIFs: Click to view the GIF
- Tweet from imgnAI (@imgn_ai): catgirls are at NVIDIA GTC ⨠meowing for your creative freedom š this is a message that needs to be heard š±š
- Load: no description found
- EveryDream2trainer/caption_cog.py at main Ā· victorchall/EveryDream2trainer: Contribute to victorchall/EveryDream2trainer development by creating an account on GitHub.
- GitHub - xai-org/grok-1: Grok open release: Grok open release. Contribute to xai-org/grok-1 development by creating an account on GitHub.
- Reddit - Dive into anything: no description found
- OpenDatasets/dalle-3-dataset Ā· Datasets at Hugging Face: no description found
LAION ā· #research (13 messagesš„):
- Misconception About Web UIs on Colab: A member expressed concern that web UIs are considered risky, but another clarified that you simply canāt use free Colab with them.
- Cross-Channel Confusion: There was a brief exchange where a participant mistakenly thought that a discussion about web UIs was related to cutting-edge research.
- Document Shared on Multimodal World Model: A link to a Google document describing a Generative Audio Video Text world model was posted but without further context or discussion.
- Exploration of Continuous Pretraining of LLMs: A research paper was shared, discussing efficient methods for continuous pretraining of Large Language Models (LLMs) with a focus on overcoming distribution shift challenges.
- Code Repository for Grok Open Release Introduced: A GitHub link to xai-org/grok-1 was shared, referencing an open release of Grok without additional comments or discussion.
- Rumors About Nvidiaās GPT-4 Architecture: Discussion revolved around the rumor that Nvidia confirmed GPT-4 has a Mixture of Experts (MoE) architecture with 1.8 trillion parameters, yet it was noted that this isnāt necessarily referring to GPT-4. An image from Twitter was mentioned to support the claim.
Links mentioned:
- MM1: Methods, Analysis & Insights from Multimodal LLM Pre-training: In this work, we discuss building performant Multimodal Large Language Models (MLLMs). In particular, we study the importance of various architecture components and data choices. Through careful and c...
- Simple and Scalable Strategies to Continually Pre-train Large Language Models: Large language models (LLMs) are routinely pre-trained on billions of tokens, only to start the process over again once new data becomes available. A much more efficient solution is to continually pre...
- Generative Audio Video Text world model: no description found
- GitHub - xai-org/grok-1: Grok open release: Grok open release. Contribute to xai-org/grok-1 development by creating an account on GitHub.
OpenAccess AI Collective (axolotl) ā· #general (99 messagesš„š„):
-
Training Dilemmas with Llama Chat Model: Discussing the limitations of LoRa training on the llama chat model, members expressed concern that training with completion data might not impact the chat behavior significantly due to information loss when converting from raw to Q/A format. A conversion to a sharedGPT format using axolotl was proposed, with subsequent tokenization into llama-2 chat format.
-
Axolotl As A Training Tool: Axolotl, a tool used for fine-tuning models, was highlighted as a useful alternative to directly setting up transformers code, allowing users to pass a YAML file instead. There were questions about memory optimizations in Axolotl compared to LoRa, and if it wrapped around the HF (Hugging Face) training ecosystem for easy config.
-
Exploring Quantization Methods: A debate ensued over the most effective quantization method, with AQML mentioned as potentially superior to AWQ. The discussion included a GitHub repository for AQLM, highlighting its use for extreme model compression.
-
Anticipation for NVIDIAās Next-Gen RTX 5000 Series: There was excitement about Nvidiaās upcoming RTX 5000 series, based on leaks suggesting significant VRAM and bandwidth improvements. A TechPowerUp article detailed potential specifications, sparking a conversation on the implications for AI training.
-
Grok-1 Weights and Sequoia Framework Unveiled: Grok-1 model weights release was shared, leading to inquiries about its potential MoE setup and benchmarking needs versus models like Mixtral. The introduction of Sequoia, a speculative decoding framework to speed up LLM inference, was highlighted, with a link to Together AIās Sequoia blog post discussing its performance on consumer-grade GPUs.
Links mentioned:
- Tweet from Brivael (@BrivaelLp): Zuck just reacted to the release of Grok, and he is not really impressed. "314 billion parameter is too much. You need to have a bunch of H100, and I already buy them all" š¤£
- Sequoia: Scalable, Robust, and Hardware-aware Speculative Decoding: no description found
- Wizard Cat Magus Cat GIF - Wizard Cat Magus Cat Witch Cat - Discover & Share GIFs: Click to view the GIF
- GTC March 2024 Keynote with NVIDIA CEO Jensen Huang: Watch NVIDIA CEO Jensen Huangās GTC keynote to catch all the announcements on AI advances that are shaping our future.Dive into the announcements and discove...
- GitHub - Vahe1994/AQLM: Official Pytorch repository for Extreme Compression of Large Language Models via Additive Quantization https://arxiv.org/pdf/2401.06118.pdf: Official Pytorch repository for Extreme Compression of Large Language Models via Additive Quantization https://arxiv.org/pdf/2401.06118.pdf - Vahe1994/AQLM
- GitHub - xai-org/grok-1: Grok open release: Grok open release. Contribute to xai-org/grok-1 development by creating an account on GitHub.
- GeForce RTX 5000: Rumors about Nvidia's next graphics card generation: Nvidia's next big gaming GPU could get more and faster memory, along with more shader cores.
- NVIDIA GeForce RTX 50-series "Blackwell" to use 28 Gbps GDDR7 Memory Speed: The first round of NVIDIA GeForce RTX 50-series "Blackwell" graphics cards that implement GDDR7 memory are rumored to come with a memory speed of 28 Gbps, according to kopite7kimi, a reliabl...
OpenAccess AI Collective (axolotl) ā· #axolotl-dev (24 messagesš„):
- ScatterMoE Optimizations Excite Developers: Discussion on a new ScatterMoE model optimization that significantly outperforms Huggingface implementation and MegaBlocks. The repo can be found on GitHub.
- In the Pipeline: Training with ScatterMoE: A member asked about the specific configuration for training with ScatterMoE, indicating they would appreciate guidance on running a test to check for normal loss behavior.
- Torch Upgrade Considered: Conversation about upgrading Axolotlās PyTorch version to at least 2.2 due to incompatibility with the new kernels on torch 2.0.1. A member confirms they are using torch 2.2.1.
- Mixed Feedback on Grok Weights: Thereās curiosity and skepticism about using Grok weights with Axolotl, members mentioned an attempt with 314B Grok weights, but noted the performance was not impressive considering the model size.
- Ambitions for qLoRA with Grok: Discussion about whether Axolotl qLoRA FSDP could handle the large Grok models, while another member points out that the released Grok may only be the int8 checkpoint, referencing a GitHub link.
Links mentioned:
- implement post training by ehartford Ā· Pull Request #1407 Ā· OpenAccess-AI-Collective/axolotl: Does this look right?
- GitHub - xai-org/grok-1: Grok open release: Grok open release. Contribute to xai-org/grok-1 development by creating an account on GitHub.
- implement post training by ehartford Ā· Pull Request #1407 Ā· OpenAccess-AI-Collective/axolotl: Does this look right?
- GitHub - OpenAccess-AI-Collective/axolotl at scatter_moe: Go ahead and axolotl questions. Contribute to OpenAccess-AI-Collective/axolotl development by creating an account on GitHub.
OpenAccess AI Collective (axolotl) ā· #general-help (35 messagesš„):
-
Troubleshooting Tokenizer Quirks: In addressing tokenizer issues, a participant is finetuning an instruct model for document summarization but encounters a problem where the model inconsistently generates the `<summary>` tag, sometimes with an unexpected space in front of it. It was suggested to look into potential causes such as whitespace characters.
-
Local Dataset and Model Configuration Woes: A newcomer to LLMs is learning how to point a fine-tuning configuration to local datasets and models rather than pulling from Hugging Face. The correct approach is to specify the full file path rather than a relative path, and to make sure it is not formatted as a repo identifier, which throws an `HFValidationError`.
-
Dataset Dilemmas and JSON Structure: A member is experiencing āindex out of rangeā errors while trying out different data types for a conversational dataset. It is suggested to check for empty rows, double-check the readme for conversation options, and ensure that āroleā in the dataset array isnāt empty.
-
Bug Hunting: Evaluating Role Mapping and Sample Packing: One user reported that a role map was missing ābard,ā ābing,ā and āchatgptā in their test dataset. Another discovered an issue where an eval set was considered too small for sample packing during a 2-epoch run but not during a 10-epoch run; a potentially epoch-independent validation bug.
-
Building Custom Completion Datasets: For building a custom completion dataset, members are advised it's as simple as making a `jsonl` file with each line containing a JSON object whose text attribute holds the completion text. The README file contains instructions on how this can be accomplished.
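A minimal sketch of that completion-dataset format: one JSON object with a text field per line. The filename and example texts are illustrative.

```python
# Write a completion-style dataset as JSONL, one {"text": ...} object per line.
import json

examples = [
    {"text": "The quick brown fox jumps over the lazy dog."},
    {"text": "Each line of the file is an independent completion sample."},
]

with open("completions.jsonl", "w", encoding="utf-8") as f:
    for ex in examples:
        f.write(json.dumps(ex, ensure_ascii=False) + "\n")
```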
OpenAccess AI Collective (axolotl) ā· #datasets (8 messagesš„):
- NVIDIA NeMo-Curator Introduced: A member linked to NVIDIAās NeMo-Curator, a scalable toolkit for data curation on GitHub, asking if anyone has tried it.
- Search for Hybrid Mistral Model: A participant inquired about a Mistral FT specifically trained on both the orca-math-word-problems-200k dataset and the nvidia/OpenMathInstruct-1 dataset.
- Lack of a Combined Mistral Model Noted: One member stated that they donāt believe a combined model exists, mentioning the size of NVIDIAās dataset as a possible reason.
- Assessing the Potential of Model Merging: A conversation took place around mergekit and the possibility of using model merging as an alternative to training on orca-math-word-problems-200k dataset.
- Strategy for Merging Models Explored: In discussing the use of mergekit, it was advised to ensure that models to be merged are trained with the same chat format, opening a conversation about further fine-tuning for compatibility.
Link mentioned: GitHub - NVIDIA/NeMo-Curator: Scalable toolkit for data curation: Scalable toolkit for data curation. Contribute to NVIDIA/NeMo-Curator development by creating an account on GitHub.
OpenAccess AI Collective (axolotl) ā· #rlhf (1 messages):
duh_kola: Is it possible to use different lora adapter to do dpo on another model
CUDA MODE ā· #general (43 messagesš„):
-
Photonics Chip Insights Shared: A video was shared discussing the breakthrough in photonics chips that are claimed to be 1000 times faster, with the source from an Anastasiaās tech update video on YouTube, and related information from Lightmatter, a photonic supercomputer company. The video titled āNew Chip Breakthrough: x1000 fasterā can be found at Photonics Chips YouTube Intro.
-
Alternative GPU Profiling Services Discussion: Users discussed cloud GPU services for profiling CUDA kernels. RunPod.io and LambdaLabs were suggested, but a user experienced permission issues with NVIDIA GPU Performance Counters on RunPod, planning to report back on LambdaLabsā service.
-
Model Orchestration Query: Flyteās popularity for ML pipeline orchestration was queried among the predominantly machine learning-focused group, but no further discussion or answers were provided.
-
PyTorch Tensor Management Explained: The ethos behind PyTorch's explicit tensor device management was discussed, highlighting the philosophy of no unexpected behavior or magic in terms of device handling, in contrast to TensorFlow's implicit device placement and its costs (a small example appears after this list).
-
Questions on LLM Access and Introductions: Inquiries were made regarding free LLM APIs, easy to deploy open source models, and the proper channels for introductions and long questions within the community. No particular recommendations or orientation guidelines were provided.
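To illustrate the explicit device handling discussed above, here is a small PyTorch example; it assumes a CUDA device is available (otherwise substitute "cpu").

```python
# Small illustration of PyTorch's explicit device handling.
import torch

x = torch.randn(2, 3)                  # created on the CPU by default; nothing moves implicitly
y = x.to("cuda")                       # explicit copy to the GPU
z = torch.randn(2, 3, device="cuda")   # or allocate directly on the GPU

# Mixing devices raises an error instead of silently transferring data:
# x + z  ->  RuntimeError: Expected all tensors to be on the same device

torch.set_default_device("cuda")       # opt-in global default (see the docs link below)
w = torch.randn(2, 3)                  # now allocated on the GPU
print(y.device, z.device, w.device)
```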
Links mentioned:
- Rent Cloud GPUs from $0.2/hour: no description found
- Product - Chip - Cerebras: no description found
- torch.set_default_device ā PyTorch 2.2 documentation: no description found
- GPU Cloud, Clusters, Servers, Workstations | Lambda: GPU Cloud, GPU Workstations, GPU Servers, and GPU Laptops for Deep Learning & AI. RTX 4090, RTX 3090, RTX 3080, RTX A6000, H100, and A100 Options. Ubuntu, TensorFlow, and PyTorch Pre-Installed.
- New Chip Breakthrough: x1000 faster: Get TypeAI PREMIUM now! Start your FREE trial by clicking the link here: https://bit.ly/Mar24AnastasiInTechThe paper: https://www.nature.com/articles/s41586...
- LightmatterĀ®: no description found
- GTC March 2024 Keynote with NVIDIA CEO Jensen Huang: Watch NVIDIA CEO Jensen Huangās GTC keynote to catch all the announcements on AI advances that are shaping our future.Dive into the announcements and discove...
- Silicon Photonics: The Next Silicon Revolution?: My deepest thanks to friend of the channel Alex Sludds of MIT for suggesting this topic and helping me with critical resources. Check him out here: https://a...
- Running Neural Networks on Meshes of Light: I want to thank Alex Sludds for his efforts in helping me research and produce his video. Check out his work here: https://alexsludds.github.ioLinks:- The As...
CUDA MODE ā· #triton (7 messages):
-
Triton Debugging Visualizer Launched: A new visualizer for Triton debugging was introduced, designed to facilitate the viewing of the spatial structure of load/stores during complex function implementation. No visuals or links were provided.
-
Triton Puzzles for Practice: A set of Triton Puzzles has been created to help users familiarize with Triton, noted to be more challenging than GPU puzzles. There are 2 known bugs with the visualizer: occasional double visualization and segmentation faults.
-
Resource Request for Learning Triton: A member asked for guides or resources to learn Triton for someone already proficient in CUDA, and was directed to try the newly shared Triton Puzzles and the official website tutorials.
-
New Triton Puzzle Interpreter Feature: A new interpreter feature that runs Triton on CPU was mentioned, indicating that the puzzles do not require a GPU. A minor text correction was also pointed out: āsinceā should replace āsenseā in the instructions.
-
Positive Reception to Triton Puzzles: The introduction of Triton Puzzles was met with enthusiasm from other members, with expressions of interest in checking them out.
Link mentioned: Google Colaboratory: no description found
CUDA MODE ā· #cuda (68 messagesš„š„):
-
Understanding Warp Schedulers and Thread Optimization: A member inquired about the configuration of warp schedulers and the number of threads they control, seeking to optimize CUDA efficiency and occupancy by understanding the simultaneous thread execution capabilities.
-
Clarifying the Meaning of āActive Warpā in CUDA: Discussion revolved around what constitutes an āactive warpā within CUDA. An āactive warpā was tentatively defined as having at least one active thread, despite some debate over the technical possibility of an active warp having no active threads due to conditional code skips.
-
Managing Producer and Consumer Memory: The chat hosted an extensive discussion on memory management for producers and consumers in CUDA environments. It was clarified that ProducerProvides and ConsumerTakes are not related to memory allocation but to ownership and usage within iterations of data processing.
-
Benefits of a Memory Manager: The use of a memory manager was proposed to simplify complex memory handling across various hardware and execution scenarios, leading to clearer code with fewer explicit details, potentially avoiding idle GPU times.
-
Sharing CUDA Project Structure Best Practices: Queries concerning CUDA project structure arose, particularly regarding whether the `main()` function should be placed in a .cpp or .cu file, and the method of including a kernel function from a .cu file. This led to a broader conversation about project separation into `.cuh`, `.cu`, and `.cpp` files.
Links mentioned:
- GitHub - tspeterkim/flash-attention-minimal: Flash Attention in ~100 lines of CUDA (forward pass only): Flash Attention in ~100 lines of CUDA (forward pass only) - tspeterkim/flash-attention-minimal
- GTC March 2024 Keynote with NVIDIA CEO Jensen Huang: Watch NVIDIA CEO Jensen Huangās GTC keynote to catch all the announcements on AI advances that are shaping our future.Dive into the announcements and discove...
CUDA MODE ā· #suggestions (5 messages):
-
Exploring Reconfigurable Computing and ML: A member shared a YouTube link to Prof. Mohamed Abdelfattahās research group at Cornell University, highlighting their work on reconfigurable computing and efficient machine learning. The research groupās website with further information was also provided: www.mohsaied.com.
-
Hardware-centric ML Course - ECE 5545 Revealed: Another link was shared detailing a masterās level course ECE 5545 (CS 5775) which focuses on machine learning systems from a hardware perspective, inclusive of optimizations like pruning and quantization and the design of both hardware and software components. The course objectives encourage understanding and optimizing ML algorithms for various hardware platforms. The full syllabus is available for those interested: Please read the syllabus.
-
Textbook Mystery in ML Course Syllabus: A point of confusion was brought up regarding the unidentified ātextbookā mentioned on the course website for ECE 5545, as it was not directly listed.
-
Locating Course Textbook in Lecture Video: The mystery of the missing textbook title for the ECE 5545 course was resolved with a tip that the first lecture video mentions it.
Links mentioned:
- ML Hardware and Systems: no description found
- Prof. Mohamed Abdelfattah: This is the channel for Prof. Mohamed Abdelfattah's research group at Cornell University. We are researching reconfigurable computing and efficient machine learning. For more information check out...
CUDA MODE ā· #jobs (1 messages):
vim410: Depends. But yes.
CUDA MODE ā· #beginner (5 messages):
-
Solid CUDA Foundation for ML Transition: A PhD student in computational fluid dynamics with beginner to intermediate CUDA skills expressed their intention to switch to CUDA computing for ML. They listed their proficiency in various CUDA concepts and planned to use the serverās lectures and Andrej Karpathyās Zero to Hero series as a learning path.
-
DL Framework Play Recommended: A member suggested that the PhD student start by experimenting with a DL framework such as torch, explaining that ML/DL optimization largely revolves around matrix multiplications, nonlinearities, softmax, and normalization functions (a tiny sketch of these operations follows this list).
-
Book Recommendation for Advanced CUDA Learning: The same member recommended the book Programming Massively Parallel Processors as a resource to deepen the PhD studentās CUDA knowledge, sharing an Amazon link to the book.
-
Cued Up for General CUDA Expertise: It was noted that while Programming Massively Parallel Processors has some DL content, it is mainly an excellent resource for general CUDA programming. The emphasis is on the comprehensive nature of the book for CUDA rather than its ML/DL content.
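To make the "matmuls, nonlinearities, softmax, normalization" point concrete, here is a tiny numpy sketch of those four operations. It is illustrative only and not from the discussion, but these are the kernels one repeatedly meets when optimizing DL workloads in CUDA.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal((4, 64))          # batch of 4 token embeddings
W = rng.standard_normal((64, 64))

h = np.maximum(x @ W, 0)                                                  # matmul + ReLU nonlinearity
h = (h - h.mean(-1, keepdims=True)) / (h.std(-1, keepdims=True) + 1e-5)   # layernorm-style normalization
scores = h @ h.T / np.sqrt(h.shape[-1])                                   # attention-style similarity
probs = np.exp(scores - scores.max(-1, keepdims=True))
probs /= probs.sum(-1, keepdims=True)                                     # row-wise softmax
print(probs.shape)   # (4, 4)
```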
Link mentioned: no title found: no description found
CUDA MODE ā· #pmpp-book (6 messages):
- CUDA Indexing Confusion Resolved: A member questioned the indexing calculation in chapter 2, question 2, specifically asking why not use `i = blockIdx.x * blockDim.x + threadIdx.x * 2`. It was clarified that such an approach causes double-counting of indices, and an example demonstrating the issue was provided (see the sketch at the end of this section).
- Query about Blogging Exercise Solutions: A participant considered blogging their solutions to the exercises but was unsure, having received no response from the authors about sharing potentially instructor-only content. They also noted that they lack an educational email address with which to contact the authors.
- Potential Check with the Authorities: In response to concerns about blogging exercise solutions, a member offered to check with Wen-mei, presumably one of the authors or figures of authority related to the content, for proper guidance.
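The sketch referenced above simulates both index formulas in plain Python for a hypothetical kernel in which each thread handles elements i and i+1 of a 16-element array (two blocks of four threads). The questioned formula revisits indices 4 and 6 and leaves most elements without an owner, while multiplying the whole expression by 2 gives every thread a distinct starting element.

```python
block_dim, n_blocks = 4, 2   # toy configuration: 2 blocks of 4 threads

wrong = sorted(b * block_dim + t * 2            # i = blockIdx.x*blockDim.x + threadIdx.x*2
               for b in range(n_blocks) for t in range(block_dim))
right = sorted((b * block_dim + t) * 2          # i = (blockIdx.x*blockDim.x + threadIdx.x)*2
               for b in range(n_blocks) for t in range(block_dim))

print(wrong)  # [0, 2, 4, 4, 6, 6, 8, 10] -> indices 4 and 6 assigned to two threads
print(right)  # [0, 2, 4, 6, 8, 10, 12, 14] -> distinct start index for every thread
```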
CUDA MODE ā· #ring-attention (14 messagesš„):
- Discussion on Attention Matrix Materialization: A blog post writer inquires about the memory requirements for the attention matrix in blockwise attention methods, citing potential confusion in Liu's papers regarding Ring Attention. Liu is said to claim an advantage of linear memory scaling with the block size c, but questions arise when comparing this to the squared chunk-length memory requirement mentioned for FlashAttention.
- Seeking Clarification on Memory Scaling: A discussion unfolds regarding the memory scaling properties of attention mechanisms, especially under FlashAttention and RingAttention. Members acknowledge a potential discrepancy between the perceived linear memory scaling and the actual implementation within SRAM constraints, indicating complexities in chunk size management.
- Striped Attentionās Improved Workloads: A member provides a link to a paper that introduces Striped Attention, which addresses workload imbalances in causal transformer models by distributing tokens uniformly and achieving better throughput.
- Dissecting FlashAttention Optimization: Members delve into the inner workings of FlashAttention's computation and memory savings, suggesting that sequentially updating the q-output pairs with one kv-block at a time may avoid ever forming a full c^2 score matrix (a minimal sketch of this accumulation follows this list). There's a shared interest in thoroughly understanding and potentially becoming experts on FlashAttention.
- Potential Misinterpretation in Ring Attention Memory Requirements: In trying to make sense of the memory requirement statements, members speculate that there might be a misunderstanding with the term linear scaling. Suggestions hint towards linear scaling possibly being in relation to the number of blocks, rather than the block size itself.
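The sketch referenced above is a minimal, non-causal numpy version of the blockwise accumulation being discussed: keys and values are consumed one block at a time with an online softmax, so only a (num_queries × block) score tile is ever materialized rather than the full attention matrix. It illustrates the memory argument only; the real FlashAttention and Ring Attention kernels additionally tile over queries, apply the causal mask, and (for the ring variant) pass kv-blocks between devices.

```python
import numpy as np

def blockwise_attention(q, k, v, block=128):
    """Single-head attention computed one kv-block at a time with an online
    softmax; only a (len(q), block) score tile exists at any moment."""
    n_q, d = q.shape
    out = np.zeros((n_q, v.shape[1]))
    row_max = np.full(n_q, -np.inf)   # running max of scores per query
    row_sum = np.zeros(n_q)           # running softmax denominator

    for start in range(0, k.shape[0], block):
        kb, vb = k[start:start + block], v[start:start + block]
        s = q @ kb.T / np.sqrt(d)                  # (n_q, block) tile only
        new_max = np.maximum(row_max, s.max(axis=1))
        correction = np.exp(row_max - new_max)     # rescale earlier partial sums
        p = np.exp(s - new_max[:, None])
        row_sum = row_sum * correction + p.sum(axis=1)
        out = out * correction[:, None] + p @ vb
        row_max = new_max
    return out / row_sum[:, None]

# sanity check against the naive quadratic implementation
rng = np.random.default_rng(0)
q, k, v = (rng.standard_normal((512, 64)) for _ in range(3))
s = q @ k.T / np.sqrt(64)
p = np.exp(s - s.max(axis=1, keepdims=True))
ref = (p / p.sum(axis=1, keepdims=True)) @ v
assert np.allclose(blockwise_attention(q, k, v), ref, atol=1e-6)
```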
Links mentioned:
- Striped Attention: Faster Ring Attention for Causal Transformers: To help address the growing demand for ever-longer sequence lengths in transformer models, Liu et al. recently proposed Ring Attention, an exact attention algorithm capable of overcoming per-device me...
- flash-attention/csrc/flash_attn/src/flash_fwd_kernel.h at main Ā· Dao-AILab/flash-attention: Fast and memory-efficient exact attention. Contribute to Dao-AILab/flash-attention development by creating an account on GitHub.
- add naive triton kernel for varlen Ā· zhuzilin/ring-flash-attention@10d992c: no description found
CUDA MODE ā· #off-topic (5 messages):
-
MLSys 2024 on the Horizon: Members are preparing for the MLSys 2024 conference in May, a key event at the intersection of Machine Learning and Systems. The conference emphasizes interdisciplinary focus and optimization of AI systems as integral to developing efficient and effective AI.
-
Poetic Conference Titles: A comment humorously observed that the phrase āThe Conference for the Era of AIā from the MLSys conference description fits iambic pentameter, a common meter in poetry.
-
Smartphone Woes: A member made a brief quip about a phone not meeting expectations, mockingly calling it a āNot so smart phone.ā
-
Math Mysteries on Smartphones: A discussion emerged about the correct way to interpret mathematical operations on smartphones, with one member asserting that the order of multiplications/divisions should be from left to right.
-
Calculator Conundrums Solved: The correct treatment of mathematical expressions on calculators was clarified, noting that scientific calculators differentiate between expressions like `ax` and `a×x`.
Link mentioned: MLSys 2024: no description found
CUDA MODE ā· #gtc-meetup (9 messagesš„):
- Marksaroufim is GTC-bound on Monday: Marksaroufim announced plans to be at GTC on Monday morning and offered to share contact details for meetups.
- Neurondeepās Extended GTC Visit: Neurondeep plans to be present from March 14th to 25th and intends to attend every day of the event.
- t_vi Eyes on GTC Meetup: t_vi is also attending and is looking forward to meeting up with others.
- Marksaroufim Might Stay the Week: Initially planning for 1-2 days, Marksaroufim might end up staying the entire week at GTC, hoping for ādecent wifiā.
- Mr.osophy Tackles GTC Meme Culture: Mr.osophy humorously suggests there should be memes about being unable to attend GTC and shared a past failed attempt to volunteer for a free pass.
- Iron_bound Shares Sneaky Inspiration: In response to the discussion, iron_bound posted a YouTube video link as a humorous suggestion to sneak into events like GTC: āI Snuck Into A Secret Arms-Dealer Conferenceā.
Link mentioned: I Snuck Into A Secret Arms-Dealer Conference: Get an exclusive video every month at https://www.patreon.com/Boy_BoyWe made this in collaboration with the legendary Australian political satire group The Cā¦
OpenRouter (Alex Atallah) ā· #general (159 messagesš„š„):
-
Prompt Structure Clarification for Llama Models: A user asked whether a JSON message format containing "system", "user", and "assistant" keys could be used with llama models while using the OpenAI JavaScript library. Another participant confirmed with a simple "Yes!" (a minimal sketch of the equivalent call appears after this list).
-
Top Up to Pay: When asked about the payment method for using the chatbot models due to the absence of a required credit card connection, a user was advised to simply ātopup your balance.ā
-
Model Preference for Roleplay Consistency: Amidst discussions on the most consistent AI models for roleplay that avoid repetition and randomness, Sonnet was lauded as the best, with a confirmation that āitās not even close.ā
-
Effective Prompt Formation for Chat Models: In a discussion about chat models disregarding system messages, it was suggested to place the system message first. A user shared insights that often a systemās context can be established within user or assistant messages, ultimately advising to follow the principle of ājailbreakingā the model.
-
Analyzing Book Segments for Prompts via Automation: A user mentioned running a script that breaks down books and queries a model for corresponding prompts, using JSON formatted outputs. They noted that lzlv 70B yields better and more consistent results without errors compared to other models, which sometimes produce irrelevant data.
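For reference on the first question, below is a minimal sketch of the OpenAI-compatible call using the Python client (the JavaScript client mirrors it). The model slug and API key are placeholders; OpenRouter accepts the same system/user/assistant message list for llama-family models.

```python
from openai import OpenAI

client = OpenAI(
    base_url="https://openrouter.ai/api/v1",   # OpenRouter's OpenAI-compatible endpoint
    api_key="OPENROUTER_API_KEY",              # placeholder
)

resp = client.chat.completions.create(
    model="meta-llama/llama-2-70b-chat",       # illustrative model slug
    messages=[
        {"role": "system", "content": "You are a concise assistant."},
        {"role": "user", "content": "Summarize ring attention in one sentence."},
    ],
)
print(resp.choices[0].message.content)
```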
Links mentioned:
- OpenRouter: A router for LLMs and other AI models
- GitHub - xai-org/grok-1: Grok open release: Grok open release. Contribute to xai-org/grok-1 development by creating an account on GitHub.
LangChain AI ā· #general (95 messagesš„š„):
- Seeking Clarity on API Futures: A member inquired about whether to use `astream_log` or `astream_events` and questioned if one might be deprecated in favor of the other, recognizing that `astream_events` is currently in beta.
- Upcoming Advanced Research Assistant & Search Engine: paul16307 announced an advanced research assistant and search engine project, inviting members to become beta testers and offering 2 months of free premium access. The service includes models like Claude 3 Opus, GPT-4 Turbo, and Mistral Large, hosted on Groq's servers, and a sign-up waitlist is available at Rubik's AI.
- Documentation Dive: Community members discussed the complexity of LangChain documentation, with namenot223_69478 expressing that the docs are challenging for beginners. evolutionstepper advised reading the code and API reference after getting comfortable with the basics, while .bagatur requested feedback on specific confusing pages or suggestions for new ones.
- Call for Beta Testers for AI-Powered Tutor: theoneandonly414 advertised an AI-powered tutor and AI whiteboard, asking members to join the waitlist at Bloon AI. While they are part of the operations team, the founder is handling development, and funding is currently being raised.
- Query on Structured LLM Output: gryhkn requested help generating structured output from LLMs using LangChain, prompting widium to provide a comprehensive Python example utilizing Pydantic for city data output, a demonstration appreciated by gryhkn (a minimal sketch of the same pattern follows below).
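The exact example shared in the channel is not reproduced here; the snippet below is a minimal sketch of the general LangChain + Pydantic pattern, in which a `PydanticOutputParser`'s format instructions are injected into the prompt. The `City` schema, model name, and prompt wording are illustrative assumptions.

```python
from langchain.output_parsers import PydanticOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.pydantic_v1 import BaseModel, Field
from langchain_openai import ChatOpenAI

class City(BaseModel):
    name: str = Field(description="City name")
    country: str = Field(description="Country the city is in")
    population: int = Field(description="Approximate population")

parser = PydanticOutputParser(pydantic_object=City)

prompt = ChatPromptTemplate.from_messages([
    ("system", "Answer the user query.\n{format_instructions}"),
    ("user", "{query}"),
]).partial(format_instructions=parser.get_format_instructions())

# prompt -> chat model -> parser returns a validated City object
chain = prompt | ChatOpenAI(model="gpt-3.5-turbo", temperature=0) | parser
print(chain.invoke({"query": "Tell me about Paris."}))
```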
Links mentioned:
- Rubik's AI - Waitlist: no description found
- Bloon AI: Redefining Intelligent Learning
- no title found: no description found
- Feature Request: Support for Negative Embeddings in Similarity Searches Ā· langchain-ai/langchain Ā· Discussion #19239: Checked I searched existing ideas and did not find a similar one I added a very descriptive title I've clearly described the feature request and motivation for it Feature request I propose adding ...
- Using Natural Language to Query Teradata VantageCloud With LLMs| Teradata: Learn to translate your English queries into SQL and receive responses from your analytic database in plain English.
LangChain AI ā· #langserve (45 messagesš„):
- Trouble Streaming with RemoteRunnable: A member is facing issues with `RemoteRunnable` not streaming output when used in JavaScript. The code is expected to call `/stream`, but it only calls `/invoke`, even though the same approach works correctly in Python (the working Python pattern is sketched after this list).
- Searching for a Solution: They've verified that the Python implementation of `RemoteRunnable` streams properly and are trying to write equivalent code in JavaScript using LangChain.js.
- Reaching Out for Help: The member is advised on how to reach the LangChain team for support, including reporting an issue on GitHub and contacting via email.
- No Known Fixes in Recent Changes: Finally, the member inquires if there were any changes in the last month that may have resolved streaming issues; however, there is no information on recent updates that could have addressed the problem.
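For contrast with the JavaScript issue, this is roughly the Python pattern the member confirmed works: calling `stream()` on LangServe's `RemoteRunnable` hits the server's `/stream` endpoint. The URL and input key below are placeholders that depend on how the chain is served.

```python
from langserve import RemoteRunnable

# placeholder route for a chain served with LangServe
chain = RemoteRunnable("http://localhost:8000/my-chain")

# stream() yields chunks as the server produces them via /stream
for chunk in chain.stream({"input": "Tell me a short story."}):
    print(chunk, end="", flush=True)
```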
Links mentioned:
- Security | š¦ļøš Langchain: LangChain has a large ecosystem of integrations with various external resources like local and remote file systems, APIs and databases. These integrations allow developers to create versatile applicat...
- RemoteRunnable | LangChain.js - v0.1.28: no description found
- Issues · langchain-ai/langchain: 🦜🔗 Build context-aware reasoning applications. Contribute to langchain-ai/langchain development by creating an account on GitHub.
- langchain_core.runnables.config.RunnableConfig ā š¦š LangChain 0.1.4: no description found
- langchain_core.runnables.base ā š¦š LangChain 0.1.4: no description found
LangChain AI ā· #share-your-work (11 messagesš„):
- AI Chatbot for Data Analysis: A new GitHub repository langchain-chatbot has been introduced, featuring an AI chatbot designed for analyzing and extracting information from data in conversational format.
- Personal Bookmark Assistant is Born: The development of a Discord AI chatbot for managing Raindrop.io bookmarks has been announced, with the project available on GitHub as open source.
- Seeking Input for Health & Productivity Digital Advisor: An invitation has been extended for individuals working in tech or professional services to join a 30-minute call to contribute insights for developing a digital advisor, with bookings available via Calendly.
- Scrapegraph-ai: A Python scraper powered by AI and LangChain, capable of scraping content with API keys, has been launched. The tool already boasts over 2300 installations and can be found on GitHub.
- Automating Sales and Research with AI: A challenge to build an AI capable of replacing the SDR/AE role led to the creation of a multi-agent automation framework utilizing Lyzr Automata and OpenAI hosted on AWS. The project can be explored on GitHub.
Links mentioned:
- User Interview š - NEUROFUSION Research, Inc.: Hey, I'm building a digital advisor to help improve how you show up to work and other areas of your life. I'd love to speak with you to learn about your needs around productivity, physical and...
- GitHub - Haste171/langchain-chatbot: AI Chatbot for analyzing/extracting information from data in conversational format.: AI Chatbot for analyzing/extracting information from data in conversational format. - Haste171/langchain-chatbot
- GitHub - VinciGit00/Scrapegraph-ai: Python scraper based on AI: Python scraper based on AI. Contribute to VinciGit00/Scrapegraph-ai development by creating an account on GitHub.
- Tweet from Siva Surendira (@siva_1gc): It took a bit more time than we thought.. But here it is.. š Automation of SDR & AE function with @lyzrai Automata and @OpenAI... Runs on @awscloud - secure and private.. How it works? š Agent 1:...
- GitHub - LyzrCore/lyzr-automata: low-code multi-agent automation framework: low-code multi-agent automation framework. Contribute to LyzrCore/lyzr-automata development by creating an account on GitHub.
- Making an AI application in 15 minutes: Stack- Custom UI and RAG: A tweaked version of Open-webui- Local LLM Hosting: Ollama for locally hosted LLMs.- Data Privacy: Integrates Pebblo by DaxaAI to e...
- Home: Iām deeply passionate about AI. Letās connect to unlock AIās potential and collaborate on innovative projects!
- no title found: no description found
LangChain AI ā· #tutorials (2 messages):
-
Nutriheal - AI-Driven Personalized Nutrition: A personalized nutrition AI app for patients, Nutriheal, was highlighted showing the use of Ollama, Open-webui, and Langchainās Pebblo integration by Daxa AI. Watch the creation process in āMaking an AI application in 15 minutesā on YouTube and explore more at navvy.co.
-
Plan-and-Execute with Langgraph: A YouTube video called āPlan-and-Execute using Langgraphā shares insights on creating a plan-and-execute style agent, drawing inspiration from the Plan-and-Solve paper and the Baby-AGI project. The method involves generating a plan before executing it in the environment.
Links mentioned:
- Plan-and-Execute using Langgraph: how to create a "plan-and-execute" style agent. This is heavily inspired by the Plan-and-Solve paper as well as the Baby-AGI project.The core idea is to firs...
- Making an AI application in 15 minutes: Stack- Custom UI and RAG: A tweaked version of Open-webui- Local LLM Hosting: Ollama for locally hosted LLMs.- Data Privacy: Integrates Pebblo by DaxaAI to e...
- Home: Iām deeply passionate about AI. Letās connect to unlock AIās potential and collaborate on innovative projects!
Interconnects (Nathan Lambert) ā· #other-papers (8 messagesš„):
-
Deciphering API-Protected LLMs: A new study outlined in an arXiv paper demonstrates that it is possible to extract substantial non-public information about commercially available LLMs through API calls. The research relies on the softmax bottleneck characteristic of modern LLMs, which restricts model outputs to a low-dimensional linear subspace; this yields various analytical advantages, including a way to estimate an LLM's hidden size (a toy sketch of the rank argument follows this list).
-
Estimation of GPT-3.5ās Size Causes Stir: Discussion in the channel revolves around recent findings that estimate OpenAIās gpt-3.5-turbo to be around 7B in size. A user expresses surprise at this number, while others are skeptical about the accuracy of this estimation.
-
Skepticism Over Model Size Accuracy: Nathan Lambert, a participant in the conversation, suggests that the 7B size estimation for gpt-3.5-turbo might be incorrect. There is an implication that the estimation doesnāt hold up if gpt-3.5 is inferred to be a Mixture of Experts (MoE) model.
-
Potential Disruption by Distillation Techniques: The skepticism continues with a hypothesis that unless there is some āmega distillation sauce,ā the size estimate for gpt-3.5-turbo is likely erroneous. This suggests reliance on advanced techniques to justify the modelās capabilities given the estimated size.
-
Performance Boost through Model Mixture Strategy: A user points to a strategy mentioned in a paper from the previous year, suggesting that performance can be significantly enhanced by using outputs from a more capable model such as GPT-4 at the start and then continuing with a different model like llama for processing math problems. This might indicate a similar approach could be in use with turbo models.
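A toy sketch of the rank argument mentioned above: because logits are `W_out @ h` with `h` in a d-dimensional hidden space, any stack of full logit vectors has numerical rank at most d, so the rank reveals the hidden size rather than the vocabulary size. The simulation below uses random matrices as stand-ins; the paper's actual contribution includes reconstructing full logit vectors from restricted API outputs, which this sketch skips.

```python
import numpy as np

rng = np.random.default_rng(0)
vocab, hidden, n_prompts = 5000, 64, 512

W_out = rng.standard_normal((vocab, hidden))   # stand-in output embedding matrix
H = rng.standard_normal((n_prompts, hidden))   # hidden states for many prompts
logits = H @ W_out.T                           # (n_prompts, vocab) "observed" outputs

sv = np.linalg.svd(logits, compute_uv=False)
est_hidden = int((sv > sv[0] * 1e-10).sum())   # numerical rank
print(est_hidden)   # ~64: matches the hidden size, not the 5000-word vocabulary
```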
Link mentioned: Logits of API-Protected LLMs Leak Proprietary Information: The commercialization of large language models (LLMs) has led to the common practice of high-level API-only access to proprietary models. In this work, we show that even with a conservative assumptionā¦
Interconnects (Nathan Lambert) ā· #ml-drama (19 messagesš„):
- Twitter Drama Anticipation: A member linked to a Twitter post by Sebastian Raschka suggesting it could stir up drama within the machine learning community.
- Open Source Definition Debate: There was a discussion regarding the Open Source Software (OSS) community potentially crystallizing a stance on what qualifies as open source, especially relating to whether data should be included.
- Shared Understanding for Open Source: Another conversation took place about the need for a shared understanding within open source licensing, highlighting the differences between Apache 2.0 and GPLv3 licenses.
- Drama Unfolds on Twitter: The member referenced a Tweet by @BlancheMinerva, indicating an ongoing disagreement which seems to add to the drama.
- Frustration and Disengagement from Twitter: A member expressed frustration with EleutherAIās approach to online discussions, mentioning that they would limit their Twitter activity and focus more on blogging after a stressful interaction.
Link mentioned: Tweet from Stella Biderman (@BlancheMinerva): @natolambert @felix_red_panda Youāre wrong though :P
Interconnects (Nathan Lambert) ā· #random (63 messagesš„š„):
-
Grok-1 Model Weights Released by xAI: The weights of Grok-1, xAI's 314B-parameter Mixture-of-Experts model, have been released as open source under the Apache 2.0 license. The release includes only the base model, not a fine-tuned version, and can be found on GitHub.
-
Grok-1 Size Surprises Community: The community expressed surprise at the 314B size of the Grok-1 model; discussions revolved around its potential training dataset size, given the model's large scale and its possible connection to Chinchilla's scaling principles (a back-of-envelope version of this estimate follows this list).
-
Grok Compared to Other Models: Grok-1ās metrics were juxtaposed against another model, Falcon, with users noting that Grok indeed has better performance on benchmarks like GSM8K and MMLU.
-
Release Mechanism Sparks Concern: The distribution of Grok-1 via a magnet link sparked discussion about the right way to release models, touching upon policy considerations and the credibility of āopenā teams. This highlighted a general concern about how powerful models should be made available to the public.
-
Novel Distribution Ideas: A humorous speculation on the feasibility of mailing physical drives with model weights came up as a comparison against the cost of cloud services, leading to a wider conversation about the practicalities of model distribution.
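As a back-of-envelope version of the dataset-size speculation above, and nothing more: Chinchilla's roughly 20-tokens-per-parameter heuristic was derived for dense models, and as the quoted tweet notes it need not transfer to a MoE, but applying it naively to the total versus the active parameter count brackets the guesses being discussed. The active-parameter figure below is an assumption.

```python
total_params = 314e9          # Grok-1 total parameters
tokens_per_param = 20         # Chinchilla heuristic for dense models
print(f"naive (total params):  {total_params * tokens_per_param / 1e12:.1f}T tokens")   # ~6.3T

active_params = 86e9          # assumed active parameters per token for a 2-of-8 expert MoE
print(f"naive (active params): {active_params * tokens_per_param / 1e12:.1f}T tokens")  # ~1.7T
```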
Links mentioned:
- Open Release of Grok-1: no description found
- Have We Reached Peak AI?: Last week, the Wall Street Journal published a 10-minute-long interview with OpenAI CTO Mira Murati, with journalist Joanna Stern asking a series of thoughtful yet straightforward questions that Murat...
- Tweet from Grok (@grok): @elonmusk @xai "WEIGHTS IN BIO"
- Tweet from Xeophon (@TheXeophon): Chinchilla doesn't apply to MoE directly, does it? If it does, we can infer the training data set size for Grok. It's unexpectedly large, so I guess they went for optimality first, given the little ti...
Alignment Lab AI ā· #general-chat (6 messages):
- Exploring Aribus Applications: A member shared a Twitter link to Aribus, prompting discussion about what can be built with it. Follow-up messages indicated some confusion about the postās content.
- Search for HTTP Response-Trained Embeddings Model: A member inquired about an embeddings model specifically trained on HTTP responses, expressing uncertainty on where to start looking.
- Transformers as Embeddings Models: The same member pondered whether any transformer model trained accordingly could serve as an embeddings model.
- Request for Mistral Fine-Tuning on Specific Datasets: A member asked if anyone has a version of the Mistral model fine-tuned on both the orca-math-word-problems-200k dataset and nvidia/OpenMathInstruct-1 dataset.
- A Friendly Hello: A user briefly greeted the chat with āhiā.
Alignment Lab AI ā· #oo (32 messagesš„):
- Taming Grok 1 Requires Heavy Resources: A member mentioned Grok 1 as a āhuge beastā that only OpenAI and XAI have tried to manage. They indicated a significant need for resources, including a high amount of compute power, data, and time dedicated to experiments.
- Efficient MoE Training Infrastructure in Place: Existing infrastructure for training MoE (Mixture of Experts) models at almost 100% efficiency was mentioned, but there is a pressing need for computational resources.
- Mixed Views on Training Grok 1: While some members are willing to contribute to fine-tuning Grok 1, others questioned its performance relative to costs and suggested alternative models might offer better efficiency for the investment.
- Concerns Over Grok 1 Performance and Data Quality: Discussions highlighted the challenges Grok 1 faces, particularly with the mmlu benchmark and a poorly performing MoE model. Questions arose on the optimal data mix and the timeline for achieving results, as well as who would benefit from advancing this model.
- Comparison with High-Performance Models: Grok 1 was compared to industry-leading models like GPT-4 and Claude, stating that while it may not surpass GPT-4, it has the potential to be the best open-source large language model, possibly outperforming Claude 2.
Link mentioned: keirp/hungarian_national_hs_finals_exam Ā· Datasets at Hugging Face: no description found
LLM Perf Enthusiasts AI ā· #general (1 messages):
- The Lazy Developerās Creed: A member expressed that Devin has inspired a minimalist approach to app development, favoring tools that avoid manual tasks like pasting into terminals. They argue that anything more complex than a local app with filesystem control is unnecessarily complicated, suggesting dissatisfaction with current open-source options.
LLM Perf Enthusiasts AI ā· #claude (7 messages):
-
Tweet Sparks Controversy: Recent tweet by @tszzl suggests that Anthropic might be playing the role of ācontrolled oppositionā to instill fear among technical staff workers. The exact motive and accuracy of this claim were discussed, with a member affirming the tweetās veracity.
-
Content Moderation Woes: A member reported difficulties with content moderation on images containing people, mentioning that sometimes the system just refuses to operate correctly.
-
Curiosity about Claude: Claude Sonnetās efficacy at large scale usage was inquired about, particularly for a project that could involve several dozen million tokens per month.
Link mentioned: Tweet from roon (@tszzl): anthropic is controlled opposition to put the fear of god in the members of technical staff
LLM Perf Enthusiasts AI ā· #reliability (16 messagesš„):
-
Introducing the Knowledge Processing Unit (KPU): Maisa announces the KPU, a new framework claimed to enhance LLMs by separating reasoning and data processing, aiming to solve complex tasks. The KPU is reported to outperform well-known models like GPT-4 and Claude 3 Opus according to Maisaās blog post.
-
Benchmarks Stir Skepticism: Members expressed skepticism regarding Maisaās benchmark comparisons. They questioned why KPU + GPT-4-turbo was compared to GPT-4 without turbo, instead of comparing like versions.
-
Dissecting the KPU Architecture: The technology behind KPU is believed to include self-evaluation techniques and innovative context window management. However, thereās still confusion among members about the specificity of the tech.
-
Rate of return questionable: Concerns raised about the absence of latency information for the KPU, highlighting the trade-off between improved performance on tasks like MATH and the potential increase in latency affecting practical application in products.
-
CEO Clarifies KPU Functionality: Maisaās CEO took to Twitter to explain that KPU is an AI architecture enhancing LLMs, acting like a GPU for knowledge management without chunking or embeddings. Further, the CEO offers an API key for researchers wishing to conduct their own evaluations and a notebook, detailed in a tweet by @davipar.
Links mentioned:
- Tweet from David Villalón (@davipar): happy to answer! it is not a new model, indeed KPU is agnostic to intelligence providers (OpenAI, Antrophic...). It is a new AI architecture to work with LLMs that leverages their reasoning capabiliti...
- KPU - Maisa: AI-Powered Knowledge Processing Platform. A simple API for executing business tasks. Abstracting the complexities of using the latest AI architectures for software and app developers
LLM Perf Enthusiasts AI ā· #openai (1 messages):
res6969: https://x.com/leopoldasch/status/1768868127138549841?s=46
DiscoResearch ā· #general (21 messagesš„):
-
DiscoLM Struggles with German Responses: DiscoResearch/DiscoLM-mixtral-8x7b-v2 model fails to generate a response in German even after instruction fine-tuning on a specific dataset. The issue also extends to a ValueError encountered during sequence classification fine-tuning.
-
Technical Difficulties with Local Deployment: A user faced problems when running DiscoLM-70b through `vllm` and experienced hanging issues on a machine with ample resources. Sample server set-up and response codes provided may indicate potential compatibility issues. -
Insights into German Language Models: A Reddit post provides feedback on various German language models, with observations on the performances of SauerkrautLM Una SOLAR Instruct, Starling LM 7B Alpha, among others. Images from the Reddit thread illustrate usersā experiences with the mentioned models (Reddit Feedback).
-
Grok Model Discussion: A member shared a GitHub link to the Grok model, discussing its size of 314B parameters and pondering who might be able to run it, with another member noticing it needs only 300GB.
-
Exploration of Benchmarks for German LLMs: Discussions included potential improvements for German language models, a potential German language benchmark paper called supergleber-german-language-evaluation-benchmark, and suggestions to engage with university research resources. Links to related benchmarks were shared, suggesting an opportunity for collaboration (Informatik Uni Würzburg, OpenReview).
Links mentioned:
- Reddit - Dive into anything: no description found
- Reddit - Dive into anything: no description found
- grok-1/model.py at e50578b5f50e4c10c6e7cff31af1ef2bedb3beb8 Ā· xai-org/grok-1: Grok open release. Contribute to xai-org/grok-1 development by creating an account on GitHub.
- grok-1/model.py at main Ā· xai-org/grok-1: Grok open release. Contribute to xai-org/grok-1 development by creating an account on GitHub.
- Our Paper "SuperGLEBer: German Language Understanding Evaluation Benchmark" was accepted at the NAACL 2024: In our paper, we assemble a broad Natural Language Understanding benchmark suite for the German language and consequently evaluate a wide array of existing German-capable models in order to create a b...
- Reddit - Dive into anything: no description found
- ChuckMcSneed/WolframRavenwolfs_benchmark_results Ā· Datasets at Hugging Face: no description found
- GitHub - KLUE-benchmark/KLUE: š Korean NLU Benchmark: š Korean NLU Benchmark. Contribute to KLUE-benchmark/KLUE development by creating an account on GitHub.
- GitHub - facebookresearch/belebele: Repo for the Belebele dataset, a massively multilingual reading comprehension dataset.: Repo for the Belebele dataset, a massively multilingual reading comprehension dataset. - facebookresearch/belebele
- GitHub - google-research/xtreme: XTREME is a benchmark for the evaluation of the cross-lingual generalization ability of pre-trained multilingual models that covers 40 typologically diverse languages and includes nine tasks.: XTREME is a benchmark for the evaluation of the cross-lingual generalization ability of pre-trained multilingual models that covers 40 typologically diverse languages and includes nine tasks. - goo...
DiscoResearch ā· #discolm_german (4 messages):
-
Disco Bot Sticks to the Script: A member noted that the model is designed to adhere to the system prompt, but variations may be tested for optimal results. The demo does not use special settings and operates on fastchat/vllm.
-
Server Migration Mixed Success: Thereās been a server transfer for the demo, moving from a home kitchen setup to a more robust location. However, the transition has faced networking issues, with hopes to resolve them early next week.
-
Kitchen vs Professional Hosting ā A Wry Observation: A user humorously commented that a hobbyist server in a kitchen may run without a hitch, whereas a professional hosted server is more prone to problems like networking issues and hardware failures.
Datasette - LLM (@SimonW) ā· #ai (20 messagesš„):
-
Prompt Engineering Tools by Explosion: A member mentioned having developed tools to assist with prompt engineering, integrated into the Prodigy product by Explosion, suggesting the setup can be insightful for others interested in prompt engineering, available at Prodigy Features.
-
Prompt Engineering as Data Annotation: The same member described prompt engineering as a data annotation problem, recognizing its limitations but also acknowledging the utility of such an approach in certain contexts.
-
Open-source Prompt Testing Tool: Another member discussed trying out the open-source tool PromptTools found on GitHub, which allows for prompt testing and experimentation but lacks version management.
-
Vercelās Prompt Comparison Interface: The Vercel AI SDK was mentioned for its user-friendly interface to compare model responses for a single prompt, with a follow-up observation noting its similarity to the PromptTools playground.
-
Helicone AI for Dreamy Prompt Management: Helicone AI was introduced as the closest available solution to one memberās dream of a comprehensive prompt management system, though at the time of discussion it was only beginning to develop prompt management and analysis features.
-
Innovative Experiments in Content Translation: A member shared an experiment in content translation using different personas, with a proof of concept built on GPT-3.5-turbo. They provided a link to an example blog post with a translation experiment at the bottom: How to Build a Buzzword.
Links mentioned:
- How to Build a Buzzword: And why theyāre so powerful
- Helicone: How developers build AI applications. Get observability, tooling, fine-tuning, and evaluations out of the box.
- Vercel AI SDK: Build AI-powered applications with the latest AI language models
- GitHub - hegelai/prompttools: Open-source tools for prompt testing and experimentation, with support for both LLMs (e.g. OpenAI, LLaMA) and vector databases (e.g. Chroma, Weaviate, LanceDB).: Open-source tools for prompt testing and experimentation, with support for both LLMs (e.g. OpenAI, LLaMA) and vector databases (e.g. Chroma, Weaviate, LanceDB). - hegelai/prompttools
- GitHub - promptfoo/promptfoo: Test your prompts, models, RAGs. Evaluate and compare LLM outputs, catch regressions, and improve prompt quality. LLM evals for OpenAI/Azure GPT, Anthropic Claude, VertexAI Gemini, Ollama, Local & private models like Mistral/Mixtral/Llama with CI/CD: Test your prompts, models, RAGs. Evaluate and compare LLM outputs, catch regressions, and improve prompt quality. LLM evals for OpenAI/Azure GPT, Anthropic Claude, VertexAI Gemini, Ollama, Local &...
Datasette - LLM (@SimonW) ā· #llm (1 messages):
obra: Is it possible to recover the seed used by the openai models for a previous api request?
Skunkworks AI ā· #general (17 messagesš„):
-
Promising Method Awaiting Release: baptistelqt mentions preparing a paper/article about a method that improves global accuracy and sample efficiency in model training. They are working on better charts and results structure before release.
-
Challenges Scaling to Large Models: The member baptistelqt indicates a need to test their method for effectiveness on larger models, citing a lack of resources to empirically prove it at scale.
-
Seeking Compute Resources for Scaling: far_el. expresses interest in baptistelqtās work and offers to discuss potential allocation of compute and resources to scale the unproven method.
-
Calls for Collaboration on Quiet-STaR: satyum inquires about joining the āQuiet-STaR: Language Models Can Teach Themselves to Think Before Speakingā project, with far_el. responding by asking about proficiency in PyTorch and transformer architectures.
-
Evidence of Methodās Impact on VGG16: baptistelqt shares preliminary results comparing their method against base training on VGG16 with a subset of CIFAR100, showing improved test accuracy from 0.04 to 0.1 after just 1 epoch.
Skunkworks AI ā· #off-topic (1 messages):
pradeep1148: https://www.youtube.com/watch?v=ZlJbaYQ2hm4