Just 3 months after the Series B, Perplexity doubles its valuation again with a Series B-1. The investor list is mostly the same stellar roster as last time, with the rare split that Daniel Gross is not co-leading with Nat Friedman this round. Dan seems to have a special relationship with the company: Aravind shared a Dec 2022 email of Dan's product feedback.
Table of Contents
[TOC]
AI Reddit Recap
Across r/LocalLlama, r/machinelearning, r/openai, r/stablediffusion, r/ArtificialInteligence, /r/LLMDevs, /r/Singularity. Comment crawling works now but has lots to improve!
Llama 3 Variants and Optimizations
- Context Length Extension: In /r/LocalLLaMA, the context length of Llama-3-8B has been extended to 16K tokens, doubling its original context window.
- Multimodal LLaVA Models: The XTuner team has released LLaVA models based on Llama 3 on Hugging Face, which substantially outperform Llama 2 on various benchmarks.
- BOS Token Reminder: In /r/LocalLLaMA, a PSA reminds users to ensure their training setups add the BOS token when finetuning Llama 3 models to avoid issues like inf grad_norm or higher loss (a quick tokenizer check is sketched after this list).
- Special Token Embedding Adjustments: Adjustments have been made to the untrained special token embeddings in Llama-3-8B and shared on Hugging Face to address finetuning issues caused by zero values.
- Web browsing and interaction: In /r/LocalLLaMA, Llama-3-8B-Web action model introduced for web browsing and user interaction. WebLlama project aims to advance Llama-based agent development. Demos of voice chatting with Llama 3 8B using OpenAI TTS and Whisper shared.
- Fine-tuning and extensions: QDoRA introduced for memory-efficient and accurate fine-tuning of Llama 3 models, outperforming QLoRA and Llama 2. Hugging Face Space for creating GGUF quantizations of Llama 3 models shared. Importance of adding BOS token when fine-tuning Llama 3 discussed.
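Since the BOS issue comes up twice in this list, here is a minimal sanity-check sketch, assuming the `transformers` library and a Llama 3 tokenizer you have access to (the gated model id below is illustrative; any Llama 3 checkpoint works):

```python
from transformers import AutoTokenizer

# Illustrative checkpoint; gated on Hugging Face, any Llama 3 tokenizer works.
tok = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3-8B")

ids = tok("Hello world").input_ids
# Custom pipelines that assemble token ids by hand can silently drop BOS,
# which is the failure mode (inf grad_norm, elevated loss) the PSA describes.
if ids[0] != tok.bos_token_id:
    ids = [tok.bos_token_id] + ids

print(tok.convert_ids_to_tokens(ids[:2]))  # expect '<|begin_of_text|>' first
```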
Llama 3 Performance and Capabilities
- Instruction Following: In /r/LocalLLaMA, Llama-3-70B is praised for its ability to follow format instructions and provide concise responses without unnecessary boilerplate text.
- Model Comparison: An in-depth comparison of 20 Llama 3 Instruct model versions across HF, GGUF, and EXL2 formats at various quantization levels is shared in /r/LocalLLaMA. Key findings include EXL2 4.5bpw and GGUF 8-bit to 4-bit performing exceptionally well, while 1-bit quantizations showed significant quality drops.
- Groq-Hosted Model Performance: The Groq-hosted Llama-3-70B struggles with a lateral thinking puzzle compared to the HuggingChat version, as reported in /r/LocalLLaMA. Temperature settings significantly impact reasoning performance, with 0.4 providing the best consistency.
Phi-3 and Llama 3 Models Push Boundaries of Open-Source Language AI
- Phi-3 models released in 3.8B, 7B, and 14B sizes: In /r/singularity, Microsoft released Phi-3 models trained on heavily filtered web data and synthetic data. The 3.8B model rivals Llama 3 8B despite its smaller size, while the 14B model claims 78% on MMLU. Weights coming to Hugging Face soon.
- Phi-3 3.8B nears GPT-3.5 performance: In /r/singularity, the Phi-3 3.8B model is nearing GPT-3.5 performance on benchmarks, with 7B and 14B versions also available. Weights are releasing with a demo video, showing mind-boggling progress in model efficiency.
- Llama 3 70B ties GPT-4 on LMSYS leaderboard: In /r/singularity, Llama 3 70B took second place on the overall LMSYS arena leaderboard and tied GPT-4-Turbo for first on the English leaderboard. It can be used for free through the Groq API or Hugging Face. Questions were raised about arena ranking validity.
- Phi-3 technical report shows impressive benchmarks: In /r/singularity, the Phi-3 technical report was released, showing the 3.8B model rivaling Mixtral 8x7B with 69% MMLU and 8.38 MT-bench. The 7B and 14B models show further scaling to 75% and 78% MMLU.
- Doubling parameters yields diminishing returns for Llama 3: In /r/singularity, a chart showed that doubling parameters on the same dataset scales MMLU scores by an average of 17%, but only 5% for Llama 3 models, suggesting Llama 3 is already highly optimized.
Miscellaneous
- Parameter Scaling: According to an image shared on Reddit, doubling model parameters on the same dataset typically scales MMLU performance by 17% on average, but only 5% for Llama 3 models.
- High-Speed Inference: SambaNova Systems demonstrates high-speed inference of 430 tokens per second for Llama 3 8B using 8 chips with FP16 precision, as reported in /r/LocalLLaMA.
- Quantization Democratization: A Hugging Face Space is introduced in /r/LocalLLaMA to democratize the creation of GGUF quantizations for Llama 3 models, improving reliability and accessibility.
AI Twitter Recap
all recaps done by Claude 3 Opus, best of 4 runs. We are working on clustering and flow engineering with Haiku.
Perplexity AI Raises $62.7M at $1.04B Valuation
- Funding Details: @AravSrinivas and @perplexity_ai announced Perplexity AI raised $62.7 million in a Series B1 funding round at a $1.04 billion valuation, led by Daniel Gross, along with investors including Stan Druckenmiller, NVIDIA, Jeff Bezos, Tobi Lutke, Garry Tan, Andrej Karpathy, Dylan Field, Elad Gil, Nat Friedman, IVP, NEA, Jakob Uszkoreit, Naval Ravikant, Brad Gerstner and Lip-Bu Tan.
- Growth and Partnerships: Since January 2024, Perplexity has grown to serve 169M queries per month and has served over 1 billion queries in the last 15 months. Perplexity has partnerships with Deutsche Telekom and Softbank to distribute to ~116M users worldwide. @AravSrinivas
- Perplexity Enterprise Pro Launch: Perplexity is launching Perplexity Enterprise Pro, which comes with SOC2 compliance, SSO, user management, enterprise-grade data retention, and security warnings to address data and security concerns for enterprise use. @AravSrinivas, @perplexity_ai
Meta's Llama-3 Model Achieves Top Performance
- Llama-3 Performance: Meta's Llama-3 70B model has reached the top 5 on the Arena leaderboard, and the 8B variant has likewise surpassed many larger models. @lmsysorg
- Training Details: Llama-3 models were trained on over 15T tokens of data and aligned using SFT, rejection sampling, DPO, and PPO. @lmsysorg
- English Performance: Llama-3 70B shows even stronger performance in the English category, ranking ~1st place with GPT-4 Turbo. It consistently performs well against top models by human preference. @lmsysorg
Microsoft Releases Phi-3 Language Models
- Phi-3 Model Details: Microsoft released the Phi-3 language models in 3 sizes: phi-3-mini (3.8B), phi-3-small (7B), and phi-3-medium (14B). Phi-3-mini rivals Mixtral 8x7B and GPT-3.5 despite its small size. @arankomatsuzaki
- Training Data: Phi-3 models were trained on 3.3T tokens (mini) and 4.8T tokens (small/medium) using "heavily filtered web data and synthetic data". @arankomatsuzaki
- Benchmark Performance: Phi-3-mini achieves 68.8 on MMLU and 8.38 on MT-bench. Phi-3-medium achieves 78% on MMLU and 8.9 on MT-bench, outperforming GPT-3.5. @arankomatsuzaki, @_akhaliq
- Availability: Phi-3-mini weights were released under MIT license on Hugging Face. It is optimized for use with Hugging Face text generation inference. @_philschmid
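As a hedged illustration of using the released weights with plain `transformers` (model id from the Hugging Face links above; `trust_remote_code=True` reflects what the checkpoint asked for at release and may no longer be required):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "microsoft/Phi-3-mini-128k-instruct"
tok = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
    trust_remote_code=True,
)

messages = [{"role": "user", "content": "Summarize the Phi-3 training recipe."}]
inputs = tok.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
out = model.generate(inputs, max_new_tokens=200)
print(tok.decode(out[0][inputs.shape[-1]:], skip_special_tokens=True))
```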
Google's Gemini 1.5 Pro Achieves Strong Performance
- Gemini 1.5 Pro Performance: Googleâs Gemini 1.5 Pro API now achieves #2 on the leaderboard, surpassing GPT-4-0125 to almost reach the top spot. It shows even stronger performance on longer prompts, ranking joint #1 with GPT-4 Turbo. @lmsysorg
Other Notable Releases and Benchmarks
- Hyper-SD from ByteDance: ByteDance released Hyper-SD, a novel diffusion-distillation framework for efficient image generation that achieves SOTA performance in 1-8 inference steps. @_akhaliq
- FlowMind from JP Morgan: JP Morgan introduced FlowMind, which leverages GPT to automatically generate workflows for Robotic Process Automation (RPA) tasks. @_akhaliq
- Instruction Hierarchy from OpenAI: OpenAI proposed an Instruction Hierarchy to make LLMs prioritize privileged instructions and be more robust to prompt injections and jailbreaks. @_akhaliq
AI Discord Recap
A summary of Summaries of Summaries
1. Evaluating and Comparing Large Language Models
- Discussions around the performance and benchmarking of the newly released Phi-3 and LLaMA 3 models, with some skepticism expressed about Phi-3's evaluation methodology and potential overfitting on benchmarks like MMLU.
- Comparisons between Phi-3, LLaMA 3, GPT-3.5, and models like Mixtral across various tasks, with Phi-3-mini (3.8B) showing impressive performance relative to its size.
- Debates around the validity and usefulness of benchmarks like MMLU, BIGBench, and LMSYS for evaluating true model capabilities, with suggestions that they may become less reliable as models improve.
- Anticipation for the open-source release of Phi-3 under an MIT license, along with its promised multilingual capabilities.
2. Advancements in Retrieval-Augmented Generation (RAG)
- LlamaIndex introduced DREAM, a framework for experimenting with Distributed RAG, aiming to build robust, production-ready RAG systems.
- Discussions on innovative RAG techniques like Superposition Prompting for efficient long-context processing, CRAG for improving retrieval quality, and RAG with function calling.
- Sharing of resources on RAG evolution, credibility-aware generation, and integrating retrieval with LLM planning for structured outputs.
- Releases of open-source rerankers by @JinaAI_ to enhance RAG performance through improved vector search ranking.
3. Fine-tuning and Optimizing Large Language Models
- Extensive discussions on fine-tuning strategies for LLaMA 3 using tools like Unsloth, addressing issues like tokenizer configurations, efficient merging of LoRA adapters, and embedding knowledge.
- Comparisons between full fine-tuning, QLoRA, and LoRA approaches, with QLoRA research suggesting potential efficiency gains over LoRA.
- Mixed-precision training (BF16/FP16) for llm.c showing a ~1.86x performance improvement over FP32, as detailed in PR #218 (an illustrative PyTorch sketch follows this list).
- Optimizations in llm.c like CUDA kernel improvements (GELU, AdamW) using techniques like thread coarsening to enhance memory-bound kernel performance.
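llm.c's mixed-precision path is hand-written CUDA (see PR #218); purely as an illustration of the same BF16 idea, here is a minimal PyTorch sketch where forward/backward math runs in bfloat16 while master weights stay FP32:

```python
import torch

model = torch.nn.Linear(1024, 1024).cuda()
opt = torch.optim.AdamW(model.parameters(), lr=1e-3)
x = torch.randn(32, 1024, device="cuda")

for _ in range(10):
    opt.zero_grad(set_to_none=True)
    # Autocast runs the matmuls in bfloat16; parameters remain FP32 masters.
    with torch.autocast(device_type="cuda", dtype=torch.bfloat16):
        loss = model(x).square().mean()
    loss.backward()   # gradients are accumulated back into FP32
    opt.step()
```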
4. Multimodal and Vision Model Developments
- The introduction of Blink, a new benchmark for evaluating the core visual perception abilities of multimodal large language models like GPT-4V and Gemini.
- Releases like HiDiffusion, claiming to increase diffusion model resolutions with a single line of code, and PeRFlow for upsampling images through flow integration.
- The unveiling of SEED-X, a multimodal foundation model that bridges the gap between comprehension and generation by handling images of arbitrary sizes for real-world applications.
- Advancements in the Mixture-of-Attention (MoA) architecture for disentangled, personalized image generation from language.
5. Misc
- Perplexity AI's Valuation and Enterprise Pro Launch: Perplexity AI hit a $1 billion valuation following a successful funding round, as reported by Bloomberg. They launched Enterprise Pro, a $40/month offering with enhanced data privacy and management features, already used by companies like Stripe, Zoom, and Databricks. Discussions touched on data usage concerns and iOS app issues amidst anticipation for the April 23rd announcement.
- Hugging Face Downtime Disrupts Model Access: Many channels reported 504 Gateway Time-outs and service disruptions while trying to use Hugging Face, impacting functionality like model search and download in tools like LM Studio. Speculation pointed to possible term-blocking by Hugging Face to manage traffic, with a long-term fix to eliminate the dependency in the works.
- Phi-3 and Llama 3 Models Generate Buzz: The AI community actively discussed the newly released Phi-3 and Llama 3 models. Phi-3 garnered attention for its efficiency and performance on benchmarks like MMLU, despite skepticism about overfitting. Llama 3 saw experimentation with different variants and quantizations, alongside challenges with the tokenizer and context size. The models' potential for fine-tuning and integration with various tools was a hot topic.
- Retrieval-Augmented Generation (RAG) Gains Traction: Conversations delved into evaluating and enhancing RAG systems, from using LlamaIndex for finance bots to introducing frameworks like DREAM for distributed experimentation. Techniques such as superposition prompting, credibility-aware generation, and function-calling RAG were discussed, alongside the creation of RAG benchmarks that synthesize information from multiple documents.
PART 1: High level Discord summaries
Unsloth AI (Daniel Han) Discord
- LLaMA Leaps with Unsloth's Support: The Llama 3 Instruct Model sees advancements with a Hugging Face upload promising speed and memory improvements. Meanwhile, members share success in fine-tuning this model using Unsloth with a single 24GB GPU at BF16, maintaining quality within limited VRAM constraints.
- AI Ergonomics Isn't Just about Code: Discussing the physical aspects of deep work, engineers exchanged ergonomic setup tips, signaling the value of standing desks and specialized keyboards like the Advantage2 in maintaining productivity.
- Multilingual Models Spotlight: Showcases included Swedish and Spanish adaptations of language models, such as the llama-3-instruct-bellman-8b-swe-preview and solobsd-llama3. The Ghost 7B Alpha model also made an appearance, with tools and documents found here.
- Chatter about Phi-3 and Quantization: Excitement bubbles around Microsoft's Phi-3 Mini 4K Instruct model, with quantitative musings on 4-bit implementations. A community member's deployment of Phi-3 on Hugging Face is available here.
- Finetuning Finesse and Framework Fixes: Conversations revolved around the optimization of model fine-tuning practices and the identification of tokenizer issues, alongside community members detailing strategies for embedding knowledge into LLMs for instructional use and aligning with Unsloth's methodology.
Perplexity AI Discord
Perplexity AI Hits $1 Billion Valuation: After a successful funding round, Perplexity AI has been valued at a whopping $1 billion, even appearing in Bloomberg articles, with potential collaborations hinted involving AI expert Yann LeCun. The enterprise version, dubbed Perplexity Enterprise Pro, boasts enhanced data privacy and management features, drawing attention from major companies.
New Product Launch Brings Expectations and App Woes: The launch of Perplexity AI's Enterprise Pro at $40/month has stirred excitement and anticipation for possible upcoming features, although some frustration was voiced over technical difficulties with the iOS app on iPads. Despite the issues, the enthusiasm suggests high expectations from the current user base.
Data Privacy Takes Center Stage: In light of the Enterprise Pro introduction, users discussed data privacy concerns, prompting moderator references to official statements about user consent for data use in models. Separately, the sharing channel instructed users on the compliance requirements for sharing Perplexity AI's search threads.
Anticipation Grows for Perplexity's High-Valuation Fundraise: Community conversations buzzed about Perplexity AI seeking to raise $250 million at a $2.5 to $3 billion valuation, as members shared a TechCrunch article and a CNBC interview with CEO Aravind Srinivas, signifying rapid company growth and market interest.
API User Looks for Cutting-Edge Features: A request on the pplx-api channel highlighted a thirst for an API providing up-to-date web information, like GPT but with browsing capabilities; Perplexity's sonar online models were recommended, found in their documentation, with additional advice on prompt enhancement for improved model performance. A minimal example of such a call is sketched below.
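A hedged sketch of that recommendation: Perplexity's API is OpenAI-compatible, so the standard `openai` client pointed at their base URL works; the model name below is taken from Perplexity's docs at the time and may since have changed:

```python
from openai import OpenAI

client = OpenAI(
    api_key="YOUR_PPLX_API_KEY",            # Perplexity API key
    base_url="https://api.perplexity.ai",   # OpenAI-compatible endpoint
)
resp = client.chat.completions.create(
    model="sonar-medium-online",            # web-connected "online" model
    messages=[{"role": "user", "content": "What happened in AI this week?"}],
)
print(resp.choices[0].message.content)
```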
Stability.ai (Stable Diffusion) Discord
- Forge WebUI Attracts New Users: A newcomer to Stable Diffusion is exploring Forge Webui as a starting interface, while the community debates on various alternatives for creating AI-generated images and assets, including game and sci-fi elements.
- CUDA Conundrums and Speedy Solutions: Technical discussions are focusing on troubleshooting issues like CUDA errors and prompts for improving generation speeds, with frustration expressed over missing nodes in ComfyUI and compatibility queries about models across platforms.
- AI Fantasies and Dream Generation: Some whimsical exchanges propose using AI to design perfect partners or ideal homes, showcasing the enthusiasm for AI's potential in crafting highly personalized content.
- Stable Diffusion v3 Buzz: There's a mixture of excitement and skepticism about Stable Diffusion version 3 as users await its release, discussing insider insights from former CEO Emad and debating the software's true openness.
- Community Swaps Technical Tips and Tricks: Ongoing conversations reveal a community keen on solving practical issues like system installations transfers across drives, as they collectively navigate the evolving landscape of Stable Diffusion and its applications.
Nous Research AI Discord
- Tensor Parallel on the Vanguard: Engineers discussed the potential of tensor parallel support in the vLLM inference engine, with an expectation that Jamba support could skyrocket performance. Concerns include the proper management of contexts within Claude 3 and Big-AGI to balance costs, with memGPT and SillyTavern SmartContext as cited approaches.
- AI's Groove in High-Definition: Members shared remastered music videos, including the Beastie Boys and deadmau5 & Kaskade, along with a humorously encoded latent version of CIFAR100, titled latent-CIFAR100. A need for larger image classification datasets was recognized after testing on a 4x4x4 latent dataset, and scholarly papers like this one were shared to enrich discussions on language models and symbolic representation.
- Toolkit Triumphs and Benchmark Brinkmanship: DeepMind's Penzai enters the scene, offering a JAX-based toolkit for neural network manipulation. Meanwhile, debates ensue on the validity of the LMSYS benchmark as noted in a skeptical Reddit post. Rubik.ai threw its hat into the ring, calling for beta testers for a research assistant utilizing Claude 3 Opus and GPT-4 Turbo.
- Model Magnification and Downtime Debacles: The Phi-3-mini model was juxtaposed against LLaMA-3 and GPT-3.5, sparking debate over its quantization performance and anticipation for model weights. Hugging Face's hiccup, possibly linked to heavy LLaMA-3 use or the FineWeb dataset, was a topic, while QLoRA vs. LoRA fine-tuning approaches were compared for efficacy.
- The Quest for Optimal LLM Utilization: Members shared woes and wins of navigating Deepspeed Zero 3, pondered single-GPU optimization versus NVLink, and sifted through guidance for Llama fine-tuning best practices. The community clearly values specific fine-tuning guides, with Hugging Face's blogs and Labonne's GitHub recommended over generic Medium articles.
- Vision Benchmark Unveiled: Attention turned to RealWorldQA, an xAI benchmark dataset designed for Grok-1.5-vision-preview, generating interest within the Obsidian community. The nature of the dataset was clarified as a benchmark, not a training set, as highlighted in an xAI blog post, though a yearning for training datasets remains.
- Revealing RAG Revelations: The community examined Retrieval-Augmented Generation (RAG) through the lens of LlamaIndex performance, superposition prompting methods detailed in this Superposition Prompting Paper, and other papers shared on enhancing RAG credibility. Function-calling RAG implementations were also spotlighted, featuring resources like Pamela Fox's blog.
- Simulating Worlds Beyond Imagination: While WorldSim was offline, alternative simulations such as Super WorldSim and Snow World Simulator found a home in HuggingChat. Collaborative world-building efforts are thriving on Discord, with a focus on open models like Llama 3's upcoming releases to enrich the simulated experience.
LM Studio Discord
- GPU Gaffes and Glitches: Discussions around LM Studio's performance on AMD and Nvidia GPUs uncovered that GPU offloading is essential to avoid 100% CPU utilization and prevent system inefficiency. Solutions for "Error loading model" issues focused on turning off GPU offloading or setting specific environment variables to direct LM Studio to use dedicated GPUs.
- Hugging Face Hiccups: Users encountered 503 and 500 error messages due to Hugging Face API downtime, affecting LM Studio's ability to search and download models. While the community speculated on potential term-blocking by Hugging Face to alleviate traffic, ongoing communication through LM Studio Tweets keeps everyone updated.
- Model Mania: A variety of AI models sparked debate, with discussions on Meta-Llama-3-8B-Instruct-GGUF's infinite generation issue, finetuning Llama 3 versus Goliath 120B and Mistral, and Phi-3's surprising efficiency. Queries about integrating tools like Autogen with LM Studio and concerns over model restrictions in content generation highlighted users' desire for customization.
- Prompt Puzzles and Config Curiosities: LM Studio users shared tips on crafting system prompts for D&D scenarios, addressed Llama-3-Smaug-8B prompt concerns, and recommended preset configurations. Meanwhile, an Autogen snag involving a 2-token limit issue prompted advice for troubleshooting from the community.
- Tech Trials and ROCm Reviews: AMD GPUs using ROCm sparked reviews of Meta-Llama-3's performance, with noted speeds and questions about running large models on lower-end hardware. Resourcefulness reigned with strategies on resolving AMD GPU selection in LM Studio, and Hugging Face repository details were shared for leveraging Meta Llama 3 models effectively.
CUDA MODE Discord
- X11 Steps Up for Remote GPU Profiling: The CUDA guild explored X11 forwarding to operate the Nsight Compute GUI via SSH, with a user sharing a tutorial for setting up Nsight Compute remotely. Meanwhile, the "Effort" algorithm adds dynamism to LLM inference computations and piques interest for use with Triton or CUDA, with its code available on GitHub.
- CUDA Matrix Magic and Thread Sync Discussions: In the CUDA channel, users clarified concepts like CUDA matrix multiplication and the behavior of `__syncthreads()` in CUDA, notably highlighting architectural changes starting with Volta. Inline functions were demystified with discussions around `__forceinline` and `__inline`.
- Triton Tackling Transforms & Memory Management: Triton users faced challenges with image grayscaling and memory fragmentation, while others debated binary search implementation strategies due to current limitations. The `make_block_ptr` parameter's order caused confusion, steering the conversation to row-major versus column-major formats.
- PyTorch Practices: In the Torch channel, the guild confirmed that operations like `torch.nn.Conv2d`, `torch.nn.ReLU`, and `torch.nn.BatchNorm2d` are executed on the GPU without CPU-GPU transfers for intermediate results. GPU operation scheduling is noted to be asynchronous (a short timing sketch follows this list).
- Optimizing with CUTLASS: A heads-up for Lecture 15 on CUTLASS revved the engines of keen learners, promising deeper dives into CUDA's cutting-edge tools and techniques.
- Algorithms, Beginnings, Book Clubs, and Beyond: Sparse discussions touched on a CUDA algorithm example, beginners' journeys to mastering CUDA with entertaining styles, PMPP book chapter exercises, potential YouTube recording uploads, and mentions of JAX memory issues in implementing a denseformer. The hqq channel discussed significant Triton kernel benchmarks with a push toward efficient quantization strategies.
- Kernels, Coarsening, and Collaboration in the Engine Room: The llmdotc channel was ablaze with intense talks on atomic operation removal, BF16/FP16 mixed precision gains, demands for current CUDA versions, and coalescing insights to double GELU and AdamW kernel performance. Thread coarsening shone as a beacon of hope for optimizing memory-throttled kernels.
- Moderation, Technical Setups, and FlashAttention: Moderators donned their capes to manage content, while the massively-parallel-crew channel buzzed with plans to smooth out event recordings and future talks preparation, including a shout-out for a deep-dive on FlashAttention.
- Local GPU Enthusiasts Convene: In a lighter moment, the off-topic channel revealed a pleasant meetup of members living in the vicinity of Münster, celebrated as a hub for CUDA enthusiasts.
- Ring Attention Gains Attention: The ring-attention channel piqued curiosity through a brief mention of manual placement triumphs and tinyllama tests shared via an Axolotl GitHub link.
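On the PyTorch Practices point above, a small self-contained sketch showing why asynchronous kernel scheduling matters when timing GPU work (requires a CUDA-capable machine):

```python
import time
import torch

x = torch.randn(4096, 4096, device="cuda")

t0 = time.perf_counter()
y = x @ x                  # returns almost immediately: the kernel is only enqueued
t1 = time.perf_counter()
torch.cuda.synchronize()   # block until the GPU has actually finished
t2 = time.perf_counter()

print(f"launch {(t1 - t0) * 1e3:.3f} ms, compute {(t2 - t1) * 1e3:.3f} ms")
```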
Eleuther Discord
Local LLMs on Smartphone Horizon: Discussions explored the feasibility of running large language models (LLMs) on smartphones, considering memory bandwidth (up to 51.2 GB/s) and GPU capabilities (Exynos 2400 chipset specs), suggesting even 7-8B models might be workable. Community members examined existing apps like MLC-LLM and discussed how Hugging Face's downtime raises questions about the sustainability of free AI model hosting.
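The bandwidth number supports a quick back-of-envelope check, since decoding one token streams roughly the entire weight file; a sketch under those simplifying assumptions:

```python
# Rough upper bound on decode speed: bandwidth / bytes touched per token.
bandwidth_gb_s = 51.2      # figure quoted above for the Exynos 2400 class
params_b = 7.0             # 7B-parameter model
bytes_per_param = 0.5      # 4-bit quantized weights

weights_gb = params_b * bytes_per_param            # ~3.5 GB read per token
print(f"{bandwidth_gb_s / weights_gb:.1f} tok/s")  # ~14.6 tokens/s ceiling
```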
SpaceByte Makes Tokenization Obsolete: A new byte-level LLM architecture, SpaceByte, promises to eliminate the need for tokenization, addressing potential information leakage from tokenizers. Other discussions critiqued Fineweb's relation to LLaMA and the novel application of ProGen2 for AI-designed CRISPR-Cas proteins, showcasing LLMs' role in accelerating scientific discovery.
Scale Wisely with Tactful Debates: A clash over data rounding in a publication sparked wider conversation about constructive criticism and tone in technical debates. The skirmish illuminated misunderstandings around attributions of rounding data to the Chinchilla paper versus the replication team, unraveling deeper issues in replication methodologies.
RWKV Integration Ramps Up: GPT-NeoX developers are busy implementing RWKV (Receptance Weighted Key Value) with support for fp16 and JIT kernel compilation. Progress and tasks are detailed in GitHub Issue #1167, and developers are pushing for a version-numbering system to streamline the iteration process.
AI Designs High-Performance Proteins: Profluent Bio successfully employed LLM ProGen2 to design new CRISPR-Cas protein sequences, yielding variants with increased specificity. The accomplishment demonstrates LLMsâ expanding utility in biotechnology sectors.
HuggingFace Discord
Chatting with PDFs, Now with Math!: ai_pdf is an open-source project enabling conversations with PDF documents, excelling with math PDFs by converting them to LaTeX.
Voice Directed AI Artistry: A 2.5-minute video generated in real-time from voice commands has been shared on Reddit, pointing towards a future of AI-driven dynamic video creation.
AI Gets Reasonable: Transformers.js allows running HuggingFace Transformers directly in the browser, expanding the playfield for AI applications in web environments.
Rust Helps Minify BPE: `minbpe-rs` is a Rust port of `minbpe` with functions for tokenization and training, improving performance for NLP tasks. The project is available on GitHub.
Diffusion Dilemmas and AI Video Debates: Users discuss the feasibility of creating a 1-minute video on âAI Horseâ using Diffusion, and others tackle various implementation challenges, demonstrating the teething issues of burgeoning AI applications.
Modular (Mojo 🔥) Discord
Code Instructions Boost Hermes: After integrating code instruction examples, Hermes 2.5 has been observed to outperform Hermes 2 in various benchmarks, with notable improvements in metrics such as the MMLU benchmark score.
Mistral's Capacity Challenge: Discussions concluded that Mistral cannot be scaled beyond 8k without ongoing pretraining. Focus shifts to enhancements in model-merging strategies, such as applying the differences between UltraChat and base Mistral to Mistral-Yarn.
Empathy in AI: The Open Empathic project seeks assistance in expanding categories; contributors are guided by a YouTube tutorial and encouraged to leverage movie scenes from YouTube for diversity in empathic response training.
Mojo Delights in Differences: Clarifications were made in Mojo around parameters and arguments, the latter being runtime values while parameters remain compile-time constants. Complex patterns like "Type State" are being explored, and performance comparison to Python reveals ongoing efficiency issues, notably in IO operations.
In the Trenches with Mojo SIMD and Multithreading: Implementing SIMD patterns in Mojo yielded performance close to Rust in a CPU-limited context. However, optimization challenges remain, such as best practices for `parallelize`. In other discussions, the use of `UnsafePointer` and the phasing out of `LegacyPointer` indicate a maturation of memory handling within the language.
OpenAccess AI Collective (axolotl) Discord
- BOS Token Bug Squashed: Engineers examined an issue with LLaMA 3 not adding BOS tokens correctly during fine-tuning; a solution was discovered via a Pull Request that modifies `tokenizer.json`.
- Phi-3 Models Outpunch Their Weight: Despite their smaller size (around 3.8b parameters), Phi-3 models are showing performance comparable to larger counterparts, indicating high efficiency. They come with an open MIT license, yet might prioritize reasoning abilities over extensive knowledge.
- GPU Demands for Training AI Under the Lens: The discussion spotlighted the immense resources needed for AI model training, mentioning a specific setup with 512 Nvidia H100-80G GPUs running for a week, magnifying the computational intensity of such tasks.
- LLaMA's Extended Reach is No Joke: A member showcased a Llama 3 variant that boasts a 16K-token context length, sparking excitement for its enhanced capacity for processing longer sequences.
- The Roadblocks and Workarounds of AI Development: Conversations surfaced issues with Discord link sharing, problematic 8-bit optimizer configurations, and a lengthy 1.5-hour model merging process; there were also shared efforts for guidance on using Unsloth with Axolotl for optimized training.
- Dataset Mastery and Markdown Mysteries: Participants shared how specifying `type: sharegpt` in YAML affects dataset operations and sought documentation on the different dataset formats Axolotl provides. Concerns about GitHub's rendering of qmd files over traditional Markdown were also voiced.
OpenRouter (Alex Atallah) Discord
- Optimizer on the Move: Performance issues with Wizard 8x22b due to heavy traffic are being mitigated by optimizing the load balancer, which should lessen latencies.
- Routing Towards Efficiency: Following the deletion of Databricks: DBRX 132B Instruct (nitro), traffic will be rerouted to the main Databricks DBRX 132B Instruct model. OpenRouter also announced three new models, including a LLama 3 finetune, with updates to prompt formatting and fixes for regional network hiccups focusing on dynamic routing enhancements.
- Mitigating Model Mishaps: Sporadic performance of WizardLM-2 has been flagged by users, with SillyTavern's Assistant Prefill complicating interactions with LLaMA 3 models; a hotfix has been issued for Hugging Face's tokenizer service downtime, with a long-term resolution in the works.
- Financial Viability in AI Model Provision: There's a lively debate about the financials of providing AI services, particularly the affordability of rates and the cost differentials compared to image generation models. Discussions span FP8 quantization, active worker discounts, and the economic footprint of Groq's hardware.
- Enhancing Contract Interaction: Suggestions in the #app-showcase channel include urging users towards contract standard awareness, implementing localization for legal relevance, and incorporating a feature for illegal-terms detection, as well as the introduction of Keywords AI and DeepGaze, both leveraging OpenRouter.
OpenAI Discord
- Robo Creep Factor: Engineers engaged in debate over the Atlas robot's release, with anticipation for its market capabilities and underlying strategies, while grappling with its unsettling "creepiness" that sparks social media.
- AI Divinity Discourse: A vigorous discussion unfolded about the possibility and implications of AI spirituality, including reflections on AI consciousness, tempered by community rules on secular discourse.
- API Crafting and Interface Upgrades: Conversations around MyGPT and other tools like MetaGPT and Devika delved into their potential to craft APIs and improve app development, with interest in automated GitHub interactions.
- Model Performance Mixed Bag: LLaMa 3 elicited mixed reactions on performance among the engineers, with skepticism cast on rumored GPT-5 release dates. Additionally, there was a call for high-quality literature on generative AI, citing both OpenAI's published papers and repositories such as Arxiv.
- Prompt Engineering Nuanced Discussion: Engineers exchanged strategies on the art of prompt optimization, debating the merits of brief custom instructions and discussing the ethical side of sharing techniques. The conversation also encompassed email improvement through GPT-4 and the absence of a comprehensive prompt library.
LAION Discord
- Multimodal Model Frets Over Fitting: Existing multimodal datasets, which total around 2 million pairs, risk causing overfitting in models such as GPT-4V, particularly with LAION-COCO captions, where models show a worrying trend of memorization rather than learning.
- Innovations and Concerns in Image Handling and Surveillance: The release of Adobe Firefly Image 3 has sparked interest due to its improved image generation and integration with Photoshop. Meanwhile, concerns about AI-driven surveillance bots on Discord were addressed with the introduction of kickthespy.pet, which uses an API to detect such bots.
- The Next Wave in Visual Perception & Upscaling: Blink, a benchmark for multimodal LLMs like GPT-4V and Gemini, has arrived, challenging models with tasks requiring visual perception capabilities. In image handling, both Piecewise-Rectified Flow (PeRFlow) and HiDiffusion are making strides; however, HiDiffusion's artifact issue in high-resolution images remains a point of concern (Read more about Blink).
- Pushing the Multimodal Envelope: The conversation around multimodal models continued, with a new architecture, Mixture-of-Attention (MoA), being introduced, promising enhanced disentanglement in personalized image generation (described in this paper). The SEED-X multimodal foundation model also generated buzz with its ability to handle images of variable sizes, focusing on comprehensive understanding and generation.
- Collaboration Call in Code: An open call for collaboration to build an NLP coding assistant targeting JavaScript/Rust frameworks caught traction in the guild, with softmax_function showing occasional support despite a tight schedule across multiple projects.
LlamaIndex Discord
DREAM Big with Distributed RAG: LlamaIndex introduces DREAM, a Distributed RAG experimentation framework, while also launching various RAG enhancements like ColBERT with a Twist and LoRA Fine-Tuning. Dig into the discussions about CRAG, an innovative layer improving RAG retrieval, and open-source rerankers in LlamaIndex tweets.
Using AI Models Beyond OpenAI: Within #general, users tackle different retrieval methods for LLMs while addressing integration bugs and API key annoyances. There's a spotlight on techniques for improved context management and interest in using alternatives to OpenAI's options, as detailed in numerous LlamaIndex docs.
From LinkedIn to Google Sheets, AI Funding Data Draws Interest: A member shares an Infini Attention explainer on LinkedIn, while AI funding distribution by city is accessible on Google Sheets. New LLM-Ready Markdown integrations excite the community, and WhyHow.AI's boosted Knowledge Graph SDK invites beta testers on Medium.
Database Debates and Fine-tuning: Members in #ai-discussion actively debate database types optimal for LLM training. They underscore the importance of understanding database schema and vector store possibilities when training large language models.
OpenInterpreter Discord
Caught a Case of the Compatibility Blues: Members noted that Open Interpreter, despite successful implementations, encountered challenges with Windows and mix-ups regarding model support, specifically clarifying that OI currently only supports OpenAI for the cloud option, not Groq or the Llama 3 70b model. They also discussed stability issues with the Llama 3 70b compared to its 8b counterpart.
Say What, Interpreter?: Various functionalities and integration challenges with Open Interpreter were highlighted, such as installation issues on Windows systems and pytesseract errors, the latter mitigated by running `pip install --upgrade litellm`. Detailed troubleshooting videos, e.g., on YouTube for integrating OI with the GROQ API, show community eagerness for cost-effective solutions.
Screen Vision, but No Prophecy: In the AI vision domain, it was clarified that Open Interpreter leverages the GPT-4-vision-preview for screenshot recognition tasks, indicating a mix of text and vision capabilities within the tool.
Helping Hands and Config Stands: The community celebrated reaching 100 GitHub contributors for Open Interpreter and displayed a strong collaborative spirit. There's a push for sharing default configuration files, as seen in a pull request, to improve interactions with various models.
M1 Mac Spacebar Conspiracy: For M1 Mac users troubleshooting a recording issue where pressing the spacebar didn't work as intended, diverse solutions were proposed, including installing ffmpeg, checking microphone permissions, or switching Python versions using conda.
Cloudy with a Chance of Compatibility: There's a desire among members to see OI aligned with cloud services, with calls to enable compatibility for broader cloud platform support, including but not limited to platforms like brev.dev and Scaleway.
Interconnects (Nathan Lambert) Discord
Clickbait vs. Substance: The debate over AGI article titles in the community reflects a push for engaging yet truthful headlines. The discord in opinions, varying from AGI's ontological status to its being a matter of faith, indicates a search for thought-provoking yet honest discourse, as illustrated by titles like "AGI Isn't Real" and Mistral CEO Arthur Mensch's interview in Business Insider.
Phi-3 Under the Microscope: There is skepticism around the integrity of the Phi-3 benchmarks due to perceived overfitting on benchmarks like MMLU, calling into question their relevance for OOD performance. Criticism also extends to the model's evaluation presentation and undisclosed data pipelines, amidst excitement for Phi-3's anticipated MIT-license release and multilingual capabilities.
Benchmarking Evals: The utility of AI model evaluations is scrutinized, noting the trade-offs between automated benchmarking tools like MMLU and BIGBench and human-intensive evaluations like ChatBotArena. Perplexity-based evaluations, like AI2's Paloma, were confirmed to be more for internal training checkpoints than public competitions.
Discord Community Dynamics: Anecdotes about the community include a researcher's ephemeral tweeting habits, surprisingly low membership despite free subscription, and candid aspirations of engaging with industry figures like Ross Taylor once NDA-laden periods end.
A Tangle of Instruction and CRINGE: The ecosystem of instruction tuning is expounded with references to an introductory blog and appreciation for the classification in the MT Bench paper. Additionally, the CRINGE paper's novel training approach using negative examples gains attention and is further discussed in relation to instruction tuning.
Cohere Discord
- Project Spotlight: An open-source matchmaking application was announced, integrating @cohere Command R+, @stanfordnlp DSPy, @weaviate_io Vector store, and @crewAIInc agents. Its GitHub link was shared for community feedback.
- AI-Enhanced Job Search Tactics: Engineers discussed that personal projects and having big company names on resumes often supersede actual work experience for securing job interviews.
- Refining AI with Context: Engineers broached constraining AI responses to a given topic using preambles and BOS/EOS tokens to ensure outputs remain within the intended training scope (a preamble sketch follows this list).
- Web Scraping Headaches: Development of a generic web scraper leveraging gpt-4-turbo for identifying (selector, column) pairs was debated, with the complexity of model interaction with web elements proving challenging.
- Cohere Enthusiasts Seek Expansion: The engineering community showed strong interest in integrating Cohere Command-r with URL Grounding (RAG) into BotPress, hinting at a potential user shift from ChatGPT to Cohere if successfully implemented.
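On the preamble point, a hedged sketch using the Cohere Python SDK's chat endpoint as it existed at the time (the preamble text and model name are illustrative):

```python
import cohere

co = cohere.Client("YOUR_COHERE_API_KEY")
resp = co.chat(
    model="command-r-plus",
    preamble=(
        "You are a support bot for the Acme SDK. Answer only questions about "
        "the Acme SDK and politely refuse anything off-topic."
    ),
    message="How do I authenticate with the Acme SDK?",
)
print(resp.text)
```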
LangChain AI Discord
Webpage Wizardry with LLM Scraper: The newly unveiled LLM Scraper on GitHub presents a method to transform any webpage into structured data, leveraging an LLM's parsing capabilities and caching previous replies for subsequent requests.
Stock Analysis at Your Fingertips: AllMind AI, an AI tool that promises speedy and economical financial insights, is gunning for the top spot on Product Hunt.
Automated Graphs Get Smarter: WhyHow.AI has rolled out a major upgrade with schema-controlled automated knowledge graphs, aiming to structure user-uploaded content more efficiently. The new feature and its beta program were introduced on a Medium post.
Conversational Query Crafting: A blog post breaks down how the Self-querying retriever creates structured queries from natural language inputs, enhancing semantic similarity searches with filtering based on metadata.
Watermark Warnings for LLMs: The community delved into the concept of watermarking in AI-generated texts, a technique for planting identifiable patterns, as detailed on this resource page: Watermarking LLMs.
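As a hedged illustration of the idea (in the style of the green-list scheme of Kirchenbauer et al., not necessarily what the linked page implements): a hash of the previous token pseudo-randomly partitions the vocabulary, and "green" tokens get a logit bonus during sampling:

```python
import torch

def watermark_logits(logits: torch.Tensor, prev_token: int,
                     gamma: float = 0.5, delta: float = 2.0) -> torch.Tensor:
    """Bias a pseudo-random 'green' vocab subset, seeded by the previous token."""
    vocab_size = logits.shape[-1]
    g = torch.Generator().manual_seed(prev_token % (2**31))
    green = torch.randperm(vocab_size, generator=g)[: int(gamma * vocab_size)]
    out = logits.clone()
    out[green] += delta        # sampling now statistically favors green tokens
    return out

# Detection re-derives each step's green list and counts how many observed
# tokens fall in it; a z-test on that count flags watermarked text.
```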
tinygrad (George Hotz) Discord
TinyGrad Tackles Segfaults and Training Woes: Discussions highlighted challenges with setting up tinygrad after the ROCm 6.1 release due to segfaults, while George Hotz assured that the `master` branch is stable thanks to robust CI.
AI Hardware Hyped to Outperform Cloud: The community debated the merits of decentralized AI services like TinyBox against traditional cloud services, focusing on points such as censor resistance, local training feasibility, and the importance of real-time user data training.
Inside TinyGrad's Mechanics: In the realm of tinygrad, members dove into deep discussions about stacking tensors, shape tracking, and memory management, exchanging tutorials and documentation that reveal the innards of the minimalist deep learning library.
Windows Walks a Tightrope with CUDA: Windows users shared their experiences and workarounds for running tinygrad with CUDA, using tools like WSL and Docker, while acknowledging the platformâs official unsupported status for this setup.
George Hotz Chronicles Upcoming Tinygrad Evolutions: In a weekly roundup, Hotz mentioned focus areas for upcoming discussions, highlighting mlperf progress, potential NVIDIA CI strategies, and the goal of keeping the tinygrad codebase succinct.
ShapeTracker Tutorial, Uops Documentation, and CUDA Tensor Core Guide were shared as educational resources, while Meta AI was cited in the discussion.
DiscoResearch Discord
Mixtral Edges Out Llama3: Mixtral-8x7B-Instruct-v0.1 demonstrated superior performance to Llama3 70b instruct in a German RAG evaluation, according to shared dataset results. However, members noted potential issues with the evaluation metrics, especially the "question to context" metric, and suggested a possible formatting bug in the query template which might impact results.
Enhancing Chatbots with Execution Models and Haystack: Armifer91 is prototyping an "execute_model" function for chatbots, grouping certain functionalities and paralleling the MoE approach, while a GitHub notebook illustrates using the Haystack LLM framework for dynamically invoking services. Developers are exploring improvement techniques for Llama related to tokenization for fine-tuning, despite facing platform instability complaints with Hugging Face.
Whispers of German Speech Recognition: Members are trialing various Whisper models for German speech recognition such as whisper-tiny-german and whisper-base-quant-ct2, with a consensus on potential finetuning or quantization for enhanced functionality on smartphones.
Template Troubles and Tokenization Tangles: Complexities related to templates and tokenizer configurations in Llama-3 models were prevalent in discussions, with talk on zero weights for special tokens and alternative eos_tokens in conversational contexts. The ChatML template is standard, yet there are tokenizer-related challenges.
DiscoLM's German Precision Problem: Fine-tuning DiscoLM for German language applications prompted debates over the model's tokenization issues and potential strategies for improvement, with the Instruct model serving as a possible foundation. Suggestions were made to follow the LeoLM training approach and connect with the occiglot team to bolster Llama3's performance in German.
Latent Space Discord
Expanding the LLM Horizon: Engineers debated the prospect of using RoPE (rotary position embedding) scaling to expand large language models' context windows, showing enthusiasm and referencing a Perplexity AI article for in-depth understanding.
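A hedged sketch of what that looks like with the `rope_scaling` option that `transformers` exposes for Llama-family models (the factor and model id are illustrative; quality at the stretched length generally still needs fine-tuning):

```python
from transformers import AutoModelForCausalLM

# Linear RoPE scaling with factor 2.0 roughly doubles the usable context.
model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Meta-Llama-3-8B",
    rope_scaling={"type": "linear", "factor": 2.0},
)
```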
FineWeb Stirs Excitement: The announcement of FineWeb, a massive web data trove of 15 trillion tokens, drew attention, with expectations high due to its superior performance markers over predecessors like RefinedWeb and C4, as disclosed on Twitter.
Frameworks in Focus: Members shared mixed feelings about the Hydra framework, with some appreciating its sophisticated application-configuration capabilities while others pondered its distinctions; interest peaked with references to Hydra's GitHub repository.
Microsoft's Mighty Phi-3 Emerges: Phi-3 sparked interest with its release, operating at a grander scale than its predecessor Phi-2 and speculated to compete with notable models like Llama 3 8B; speculation was fueled by insights shared in a Tweet on Phi-3's capabilities.
Perplexity.ai Makes a Financial Leap: The technical crowd took note of Perplexity.ai's successful fundraising round, touted to enhance its search engine prowess; the announcement was revealed in a Tweet detailing the $62.7M fundraise.
Mozilla AI Discord
- 70b Beats 8b in Llamafile Matchup: Users indicated that the llama 3 70b is the go-to choice over 8b for integration with Llamafile, citing inoperability issues with the latter and highlighting that the 70b Q2 weights are a manageable 26GB in size.
- Mixed Results with M1 Pro Quantization: An issue was reported where the Q2 variant of the llama model gave scrambled output on the M1 Pro system; however, it was clarified that the model runs smoothly in CPU mode, although at a slower pace.
- Android's Address Space Limitation Stumps Llamafile: Discussion around running llamafile on Android was thwarted by the limitation that Android lacks a 47-bit address space, making support for it currently unattainable.
- Redis Pioneer Praises Llamafile: The inventor of Redis expressed approval for the llama3 70b version of Llamafile on Twitter, a commendation that received celebration from the Llamafile community.
- Port Prowess for Multimodal Models: Inquiries about operating multiple instances of llamafile led to advice on employing the `--port` flag to specify a different port for each concurrent model run.
Skunkworks AI Discord
- Surprise in Context Size: A revelation from 4chan highlighted that a certain AI might have been operating with a 32k context size throughout, challenging previous assumptions about its capabilities.
- Alternate Methods to Model Scaling: A member brought up Alpin's non-traditional approach to scaling AI models, highlighting strategies like dynamic NTK and linear scaling, which could potentially maintain effectiveness without requiring "rope".
- Matt Rolls Out 16k Config for Llama: Posted on Hugging Face was Matt's 16k configuration for the Llama model, including parameters such as `"max_position_embeddings": 16000` and the model type specified as `"llama"`. Configuration details available here.
- Medical Knowledge Made Accessible: Engaging discussions focused on simplifying medical knowledge; suggestions ranged from fine-tuning an LLM for simplicity to developing an agentic system that decomposes tasks into specialized stages, eventually translating medical summaries into layman's terms.
- OCR Data Hunt for Lesser-Known Languages: A request was made for an OCR dataset supporting less-popular languages, preferably containing document-type data, indicating ongoing efforts to increase AI's linguistic reach and accessibility.
LLM Perf Enthusiasts AI Discord
- Meta AI's "Imagine" Grips Engineer Interest: Meta AI's "Imagine" has sparked excitement among guild members, with one calling it insane and prompting requests for specific examples that showcase its capabilities.
- Finding the Right Dev Tools: Members are actively looking for tried-and-true development tools suitable for work with Large Language Models (LLMs), signifying a keen interest in optimizing their workflows.
- Azure OpenAI Service Stutters: Users are expressing frustration with Azure OpenAI, reporting significant latency with requests sometimes taking upwards of 20 minutes, and encountering rate-limiting issues when making more than two requests within a 15-second window.
- Identifying the Azure Lag Source: Some suspect that Azure's latency issues may be due to temporary service problems, rather than being a consistent issue with the platform.
- Real-Time API Response Tracking Tool Shared: A practical resource, GPT for Work's response time tracker, was shared to monitor API response times of major LLMs, which could be instrumental for engineers in search of performance optimizations.
Datasette - LLM (@SimonW) Discord
- A New Challenger Approaches in AI: Llama 3 has claimed joint 5th place on the LMSYS arena leaderboard, rubbing shoulders with top models like Claude 3 Opus and GPT-4 variants, and can run on high-end laptops.
- SimonW's Toolkit for Llama 3: Simon Willison has launched LLM, a toolset complete with a command-line interface and a Python library, designed to streamline using Llama 3 and other models. Detailed usage instructions can be found in his blog post here (a minimal Python sketch follows this list).
- AI Checks Architectural Homework: AI has carved a niche in architecture, functioning as a "preflight" tool to spot potential issues and code violations in architectural designs, though it hasn't progressed to creating blueprints yet.
- Blueprint Interpretation Still at Ground Floor: Conversations are circling around employing AI to interpret architectural blueprints, specifically for tracing ductwork in PDF formats, but no concrete solutions were tabled.
- Hackernews Digest Desideratum: An inquiry was made about a bash script to generate summaries of Hacker News, but details of the latest version were not mentioned in the discussion.
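A hedged sketch of the Python side of the LLM tool; the model alias assumes a local Llama 3 plugin (such as llm-gpt4all or llm-ollama) is installed, per the blog post linked above:

```python
import llm

model = llm.get_model("Meta-Llama-3-8B-Instruct")   # alias depends on the plugin
response = model.prompt("Five creative names for a pet pelican")
print(response.text())
```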
AI21 Labs (Jamba) Discord
- Spam Crusaders Needed: The general-chat was bombarded with spam messages linking to an unauthorized Discord invite with NSFW content.
- Jamba Compatibility Queries: A member's curiosity was piqued regarding whether Jamba is compatible with LM Studio, and they sought details on its operational requisites, akin to Claude's memory footprint.
- Jamba's Memory Appetite: Discussions unfolded around the challenges of running Jamba, particularly its hefty RAM requirements, noting that even Google Colab fell short in providing necessary resources, and efforts on Google Cloud were also fruitless.
- Spam Link Blunder: An untoward spam link promising NSFW content was distributed in the channel but should be disregarded and reported by vigilant members.
The Alignment Lab AI Discord has no new messages. If this guild has been quiet for too long, let us know and we will remove it.
PART 2: Detailed by-Channel summaries and links
Unsloth AI (Daniel Han) ▷ #general (1118 messages🔥🔥🔥):
- Unsloth Supports Phi-3 Mini: Unsloth announces their support for Microsoft's Phi-3 Mini 4K Instruct model and has uploaded a 4bit version on Hugging Face, aiming to integrate it into the Unsloth library despite some required alterations due to architectural differences from Llama 3. Their blog post about Llama 3 has been updated with this information and they are waiting to support the 14B variant when released.
- Successful Fine-Tuning on 24GB VRAM: A user reported success in fine-tuning Llama 3 using Unsloth on a 1x3090 24GB GPU with pure BF16 quality, effectively handling the memory demands and using only 16GB of VRAM.
- Ergonomic Workstation Discussions: Members shared experiences and recommendations on ergonomic workstation setups, highlighting keyboards, monitors, chairs, and the benefits of standing desks for a comfortable working environment.
- Technical Blog Post Tips: Following feedback on previous blog posts, Unsloth's upcoming posts will include more benchmarks and descriptive text within images to provide clearer context and information.
- Phi-3 Analysis and Anticipation: There is ongoing anticipation and discussion among users regarding the newly released Phi-3 models, with curiosity about further claims and applications. Some users contemplate finetuning these models and are eagerly awaiting compatibility with existing libraries.
Links mentioned:
- Practical Deep Learning for Coders - Practical Deep Learning: A free course designed for people with some coding experience, who want to learn how to apply deep learning and machine learning to practical problems.
- Microsoft launches Phi-3, its smallest AI model yet: Phi-3 is the first of three small Phi models this year.
- chargoddard/llama3-42b-v0 · Hugging Face: no description found
- unsloth/Phi-3-mini-4k-instruct-bnb-4bit · Hugging Face: no description found
- Watching The Cosmos GIF - Cosmos Carl Sagan - Discover & Share GIFs: Click to view the GIF
- microsoft/Phi-3-mini-128k-instruct · Hugging Face: no description found
- BarraHome/llama-3-orpo-v1 · Hugging Face: no description found
- Blog: no description found
- Nvidia bans using translation layers for CUDA software - previously the prohibition was only listed in the online EULA, now included in installed files [Updated]: Translators in the crosshairs.
- Finetune Llama 3 with Unsloth: Fine-tune Meta's new model Llama 3 easily with 6x longer context lengths via Unsloth!
- Tweet from Daniel Han (@danielhanchen): Phi-3 Mini 3.8b Instruct is out!! 68.8 MMLU vs Llama-3 8b Instruct's 66.0 MMLU (Phi team's own evals) The long context 128K model is also out at https://huggingface.co/microsoft/Phi-3-mini-12...
- Advantage2 ergonomic keyboard by Kinesis: Contoured design, mechanical switches, fully programmable
- Direct Preference Optimization (DPO): Get the Dataset: https://huggingface.co/datasets/Trelis/hh-rlhf-dpoGet the DPO Script + Dataset: https://buy.stripe.com/cN2cNyg8t0zp2gobJoGet the full Advanc...
- Home: Finetune Llama 3, Mistral & Gemma LLMs 2-5x faster with 80% less memory - unslothai/unsloth
- Apple Acquires French AI Company Specializing in On-Device Processing: Apple has acquired the Paris-based artificial intelligence startup Datakalab amid its push to deliver on-device AI tools. Datakalab specializes in...
- Kaggle Llama-3 8b Unsloth notebook: Explore and run machine learning code with Kaggle Notebooks | Using data from No attached data sources
- Reddit - Dive into anything: no description found
- GitHub - tinygrad/tinygrad: You like pytorch? You like micrograd? You love tinygrad! ❤️: You like pytorch? You like micrograd? You love tinygrad! ❤️ - GitHub - tinygrad/tinygrad: You like pytorch? You like micrograd? You love tinygrad! ❤️
- Kevin The Office GIF - Kevin The Office Smirk - Discover & Share GIFs: Click to view the GIF
- Tweet from Aaron Ng (@localghost): llama 3 70b beamed to my phone from my M1 Max ~7.6 tok/s with mlx. your own little gpt-4 at home
- generation_config.json · unsloth/llama-3-8b-Instruct-bnb-4bit at main: no description found
- Efficiently fine-tune Llama 3 with PyTorch FSDP and Q-Lora: Learn how to fine-tune Llama 3 70b with PyTorch FSDP and Q-Lora using Hugging Face TRL, Transformers, PEFT and Datasets.
- Unsloth update: Mistral support + more: We're excited to release QLoRA support for Mistral 7B, CodeLlama 34B, and all other models based on the Llama architecture! We added sliding window attention, preliminary Windows and DPO support, and ...
- GitHub - zenoverflow/datamaker-chatproxy: Proxy server that automatically stores messages exchanged between any OAI-compatible frontend and backend as a ShareGPT dataset to be used for training/finetuning.: Proxy server that automatically stores messages exchanged between any OAI-compatible frontend and backend as a ShareGPT dataset to be used for training/finetuning. - zenoverflow/datamaker-chatproxy
- unsloth (Unsloth): no description found
- GitHub - e-p-armstrong/augmentoolkit: Convert Compute And Books Into Instruct-Tuning Datasets: Convert Compute And Books Into Instruct-Tuning Datasets - e-p-armstrong/augmentoolkit
- GitHub - ml-explore/mlx-swift: Swift API for MLX: Swift API for MLX. Contribute to ml-explore/mlx-swift development by creating an account on GitHub.
- add P2P support · NVIDIA/open-gpu-kernel-modules@1f4613d: no description found
- iPad App · ggerganov/llama.cpp · Discussion #844: I've been playing with using llama to help me tell stories to my daughter at night. I wrote a simple native iPad app that uses llama.cpp, and provides some nice model / thread management capabilit...
- main : add Self-Extend support by ggerganov · Pull Request #4815 · ggerganov/llama.cpp: continuation of #4810 Adding support for context extension to main based on this work: https://arxiv.org/pdf/2401.01325.pdf Did some basic fact extraction tests with ~8k context and base LLaMA 7B v...
- Apple (AAPL) Growth Opportunities: Southeast Asia and Africa, Lower-E…: no description found
Unsloth AI (Daniel Han) ▷ #random (167 messages🔥🔥):
- New Llama AI Model Released: A Hugging Face model: Llama 3 70B INSTRUCT 4bit has been uploaded, promising finetuning Mistral, Gemma, and Llama up to 2-5 times faster with 70% less memory. Accompanying this is a Google Colab GPU notebook for Llama-3 8b.
- Upcoming Tutorial Materials: Community members discussed creating and sharing a guide or notebook to help with finetuning Instruct models with chat templates. It was suggested that materials including a video tutorial might be in the works.
- Struggling with Llama C++ Batch Processing: A user reports that using `--cont-batching` or `cache_prompt` in llama.cpp for simultaneous prompt processing shows no performance gains; sending prompts sequentially or concurrently takes the same amount of time (a minimal reproduction sketch follows this list).
- Gemma Keyword Extraction Challenges: A discussion took place regarding the extraction of keyphrases from customer reviews with an LLM such as Gemma, and how it often yields overly creative or inaccurate results, pushing users to consider other tools like KeyBERT.
- Unsloth Project Updates and Community Contributions: There is anticipation for Unsloth's continued work on tutorials, blog posts, and a studio for Colab, with community contributions expected, including shared notebooks.
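A quick way to check whether continuous batching is actually engaged is to time the same request fired sequentially and then concurrently against a llama.cpp server. A minimal reproduction sketch, assuming a local server started with something like `./server -m model.gguf --cont-batching --parallel 4` on port 8080 (paths and flags are illustrative; `/completion` is the endpoint exposed by llama.cpp's bundled server):

```python
import json
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

URL = "http://localhost:8080/completion"  # llama.cpp server endpoint (assumed local setup)
PAYLOAD = {"prompt": "Write one sentence about llamas.", "n_predict": 64}

def ask(_):
    req = urllib.request.Request(
        URL,
        data=json.dumps(PAYLOAD).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["content"]

# Sequential baseline: 4 requests, one after another.
t0 = time.time()
for i in range(4):
    ask(i)
seq = time.time() - t0

# Concurrent: 4 requests in flight at once. With continuous batching
# working, this should finish in well under 4x a single request.
t0 = time.time()
with ThreadPoolExecutor(max_workers=4) as pool:
    list(pool.map(ask, range(4)))
conc = time.time() - t0

print(f"sequential: {seq:.1f}s  concurrent: {conc:.1f}s")
```

If the two timings match, batching is likely not engaged (for example, if no parallel slots were allocated), which would be consistent with the behavior reported above.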
Links mentioned:
- Answer.AI - Efficient finetuning of Llama 3 with FSDP QDoRA: We're releasing FSDP QDoRA, a scalable and memory-efficient method to close the gap between parameter efficient finetuning and full finetuning.
- Q*: Like. Comment. Subscribe. Discord: https://discord.gg/pPAFwndTJd https://github.com/hu-po/docs From r to Q*: Your Language Model is Secretly a Q-Fun...
- GitHub - MaartenGr/KeyBERT: Minimal keyword extraction with BERT: Minimal keyword extraction with BERT. Contribute to MaartenGr/KeyBERT development by creating an account on GitHub.
- no title found: no description found
- unsloth/llama-3-70b-Instruct-bnb-4bit · Hugging Face: no description found
- CUDA MODE: A CUDA reading group and community https://discord.gg/cudamode Supplementary content here https://github.com/cuda-mode Created by Mark Saroufim and Andreas Köpf
- Discord - A New Way to Chat with Friends & Communities: Discord is the easiest way to communicate over voice, video, and text. Chat, hang out, and stay close with your friends and communities.
Unsloth AI (Daniel Han) ▷ #help (716 messages🔥🔥🔥):
- LLaMA Model Training Issues: Members discussed problems with fine-tuning LLaMA models, where the output repeated the same sentence or stopped prematurely. Solutions such as adjusting training configurations and verifying tokenizer settings were suggested. Additionally, users faced challenges when trying to upcast to FP16 and were guided to use specific commands for successful training and quantization.
- Exploring Quantization and Unsloth Models: Users explored how quantization affects model quality and the resource requirements for running models on limited hardware. For practical applications, a guideline suggested roughly 4-bit quantization as a balance between performance and quality.
- Setting Up and Importing to Unsloth: Challenges were mentioned regarding setting up the Unsloth environment and importing models, with particular issues around Python environment setups. Some users reported success after reinstalling packages or ensuring they had the latest version of Unsloth.
- Using Inference with Finetuned Models: Users interacting with finetuned models noticed discrepancies in the models' responses, for example output identical to the input prompt. Unsloth was reported to have recently fixed such tokenizer issues (e.g., defining stopping/eos tokens), which were impacting inference performance.
- Exporting Models and Fine-tuning Strategies: Tips for exporting Unsloth models to gguf/vLLM formats and merging LoRA adapters back to FP16 were shared (see the sketch after this list). Users sought advice on the best approaches for embedding knowledge into LLMs for instructional use, and several community members asked for general guidance on the fine-tuning process.
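As a rough illustration of the adapter-merging step mentioned in the last bullet, here is a minimal sketch using Hugging Face PEFT; the base model and adapter paths are placeholders, and Unsloth's own save helpers (such as the `save_pretrained_gguf` method referenced in the linked GitHub issue) wrap a similar flow:

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

BASE = "meta-llama/Meta-Llama-3-8B-Instruct"  # placeholder base model
ADAPTER = "./my-lora-adapter"                 # placeholder trained LoRA checkpoint

# Load the base model in FP16 and attach the trained LoRA adapter.
base = AutoModelForCausalLM.from_pretrained(BASE, torch_dtype=torch.float16)
model = PeftModel.from_pretrained(base, ADAPTER)

# Fold the adapter weights into the base weights, yielding a plain FP16
# model that can then be exported (e.g., converted to GGUF for llama.cpp).
merged = model.merge_and_unload()
merged.save_pretrained("./merged-fp16")
AutoTokenizer.from_pretrained(BASE).save_pretrained("./merged-fp16")
```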
Links mentioned:
- no title found: no description found
- config.json · Finnish-NLP/llama-3b-finnish-v2 at main: no description found
- imone (One): no description found
- unslo: GitHub is where unslo builds software.
- OrpoLlama-3-8B - a Hugging Face Space by mlabonne: no description found
- Home: Finetune Llama 3, Mistral & Gemma LLMs 2-5x faster with 80% less memory - unslothai/unsloth
- Google Colaboratory: no description found
- Tomeu Vizoso's Open Source NPU Driver Project Does Away with the Rockchip RK3588's Binary Blob: Anyone with a Rockchip RK3588 and a machine learning workload now has an alternative to the binary blob driver, thanks to Vizoso's efforts.
- save_pretrained_gguf method RuntimeError: Unsloth: Quantization failed .... · Issue #356 · unslothai/unsloth: /usr/local/lib/python3.10/dist-packages/unsloth/save.py in save_to_gguf(model_type, model_directory, quantization_method, first_conversion, _run_installer) 955 ) 956 else: --> 957 raise RuntimeErro...
- LLM Model VRAM Calculator - a Hugging Face Space by NyxKrage: no description found
- Full fine tuning vs (Q)LoRA: ⚡️ Get Life-time Access to the complete scripts (and future improvements): https://trelis.com/advanced-fine-tuning-scripts/ ⚡️ Runpod one-click fine-tuning te...
- Mervin Praison: Mervin Praison
- Atom Real Steel GIF - Atom Real Steel Movie - Discover & Share GIFs: Click to view the GIF
- Love Actually Christmas GIF - Love Actually Christmas Christmas Movie - Discover & Share GIFs: Click to view the GIF
- Google Colaboratory: no description found
- Big Code Models Leaderboard - a Hugging Face Space by bigcode: no description found
- Reddit - Dive into anything: no description found
- Carson Wcth GIF - Carson WCTH Happens To The Best Of Us - Discover & Share GIFs: Click to view the GIF
- GitHub - unslothai/unsloth: Finetune Llama 3, Mistral & Gemma LLMs 2-5x faster with 80% less memory: Finetune Llama 3, Mistral & Gemma LLMs 2-5x faster with 80% less memory - unslothai/unsloth
- Index of /: no description found
- GGUF quantizations overview: GGUF quantizations overview. GitHub Gist: instantly share code, notes, and snippets.
- yahma/alpaca-cleaned · Datasets at Hugging Face: no description found
- Brat-and-snorkel/ann-coll.py at master · pidugusundeep/Brat-and-snorkel: Supporting files. Contribute to pidugusundeep/Brat-and-snorkel development by creating an account on GitHub.
- I got unsloth running in native windows. · Issue #210 · unslothai/unsloth: I got unsloth running in native windows, (no wsl). You need visual studio 2022 c++ compiler, triton, and deepspeed. I have a full tutorial on installing it, I would write it all here but I'm on mob...
- GitHub - meta-llama/llama-recipes: Scripts for fine-tuning Meta Llama3 with composable FSDP & PEFT methods to cover single/multi-node GPUs. Supports default & custom datasets for applications such as summarization and Q&A. Supporting a number of candid inference solutions such as HF TGI, VLLM for local or cloud deployment. Demo apps to showcase Meta Llama3 for WhatsApp & Messenger.: Scripts for fine-tuning Meta Llama3 with composable FSDP & PEFT methods to cover single/multi-node GPUs. Supports default & custom datasets for applications such as summarization and Q...
- imone/Llama-3-8B-fixed-special-embedding · Hugging Face: no description found
- Trainer: no description found
- GitHub - sgl-project/sglang: SGLang is a structured generation language designed for large language models (LLMs). It makes your interaction with models faster and more controllable.: SGLang is a structured generation language designed for large language models (LLMs). It makes your interaction with models faster and more controllable. - sgl-project/sglang
- GitHub - hiyouga/LLaMA-Factory: Unify Efficient Fine-Tuning of 100+ LLMs: Unify Efficient Fine-Tuning of 100+ LLMs. Contribute to hiyouga/LLaMA-Factory development by creating an account on GitHub.
- Hugging Face status : no description found
Unsloth AI (Daniel Han) ▷ #showcase (76 messages🔥🔥):
- Swedish Language Model Progress: A showcase of the llama-3-instruct-bellman-8b-swe-preview model was provided, which has been trained for coherence and reasoning. Enthusiasm was expressed for the model, which was trained using Unsloth.
- Introducing Ghost 7B Alpha: The release of Ghost 7B Alpha, optimized for reasoning and multitasking abilities, was announced with resources such as a model card, website documentation, and a demo.
- Improvement Through Retraining: A member discussed retraining the Llama3 model using Unsloth's latest 4bit version, which led to successful results and a decision to continue experimenting with different hyperparameters.
- Solobsd Unveils Spanish Language Model: A new Spanish language model (solobsd-llama3) was announced, based on data from the Alpaca dataset, drawing appreciation and inquiries about the specific variant of Spanish it produces.
- Model Fine-Tuning Discussions: There was a technical exchange on how to effectively stop models during generation and how to work with dataset templates in the context of Unsloth and Llama3. Advice and steps for successful training and conversion were shared among contributors.
Links mentioned:
- mahiatlinux/MasherAI-7B-v6.1 · Hugging Face: no description found
- SoloBSD/solobsd-llama3 · Hugging Face: no description found
- hikikomoriHaven/llama3-8b-hikikomori-v0.1 · Hugging Face: no description found
- Remek/Llama-3-8B-Omnibus-1-PL-v01-INSTRUCT · Hugging Face: no description found
- BarraHome/llama-3-orpo-v1-merged_16bit · Hugging Face: no description found
- Hi (Ho): no description found
- neph1/llama-3-instruct-bellman-8b-swe-preview · Hugging Face: no description found
- ghost-x/ghost-7b-alpha · Hugging Face: no description found
- Ghost 7B Alpha: The large generation of language models focuses on optimizing excellent reasoning, multi-task knowledge, and tools support.
- Playground with Ghost 7B Alpha: To make it easy for everyone to quickly experience the Ghost 7B Alpha model through platforms like Google Colab and Kaggle. We've made these notebooks available so you can get started right away.
- Support Llama 3 conversion by pcuenca · Pull Request #6745 · ggerganov/llama.cpp: The tokenizer is BPE.
Unsloth AI (Daniel Han) ▷ #suggestions (73 messages🔥🔥):
- Color Confusion Conundrum: A member expressed difficulty reading the welcome message due to a poor color scheme (green background with gray text). The issue was resolved after the color was changed in response to this feedback.
- Workflow Woes in Google Colab: Members discussed the challenges of using Google Colab for CUDA and C++ development, which lacks debugging tools and syntax highlighting. The conversation spanned issues such as the messiness of print-statement debugging and slower productivity, with some suggesting VSCode over SSH instead.
- SSH and Colab Conundrum: Experiences with remote SSH access to Google Colab were shared, with a focus on workflow inefficiencies and remote SSH not being a pleasant experience. A tutorial from Puget Systems was linked for setting up Jupyter Notebooks with SSH on Windows 10.
- Philanthropic Pursuits for Unsloth Pro: The discussion explored Unsloth Pro's potential direction, suggesting applying for philanthropic grants and open-sourcing the code. However, it was mentioned that Unsloth has now secured funding and is building its platform.
- Debating The Need for a Jobs Channel: Members debated the necessity and potential risks of adding a #jobs channel to the server. Concerns about scamming, channel clutter, and maintaining focus on Unsloth were raised, without reaching a consensus.
- Vision for Vision - Model Compatibility Suggestion: Suggestions were made for future support of various models, including vision models, possibly alongside the upcoming Llama-3 vision release. Additionally, curiosity arose regarding the instruction version of newly mentioned models like Phi-3.
Links mentioned:
- How To Run Remote Jupyter Notebooks with SSH on Windows 10: Being able to run Jupyter Notebooks on remote systems adds tremendously to the versatility of your workflow. In this post I will show a simple way to do this by taking advantage of some nifty features...
- microsoft/Phi-3-mini-128k-instruct · Hugging Face: no description found
- Lecture 14: Practitioners Guide to Triton: https://github.com/cuda-mode/lectures/tree/main/lecture%2014
Perplexity AI ▷ #announcements (1 messages):
- Perplexity Enterprise Pro Launches: Perplexity introduces Enterprise Pro, a secure AI answer engine designed for businesses, featuring increased data privacy, SOC2 compliance, user management, and single sign-on. With heavyweights like Stripe, Zoom, and Databricks already leveraging its benefits, Databricks reports saving approximately 5000 hours a month.
- Enterprise Pro's Impact and Pricing: Catering to diverse industries including software, finance, and sports, Enterprise Pro offers knowledge workers the ability to search for fast, reliable information securely, priced at $40/month or $400/year per seat. Interested companies can sign up at Perplexity Enterprise.
Perplexity AI ▷ #general (1005 messages🔥🔥🔥):
- Perplexity Enterprise Pro Unleashed: A new, premium feature, Perplexity Enterprise Pro, has been announced via the official channel and on Bloomberg, offering added features like improved security and data protection measures for $40/month.
- Corporate Growth and Product Diversification: Perplexity.ai's valuation has hit $1 billion following a successful funding round, signaling expansion and a broader service offering, including the teased potential involvement of AI luminary Yann LeCun.
- Privacy Concerns and Clarifications: User discussions raised concerns about data privacy and whether data from paid users were being used for training AI models; moderators linked to official statements implying data usage consents and options.
- iOS App Challenges: Users reported persistent issues with the Perplexity app on iPad, such as inability to search or sign-in, with support advising affected users to reach out via direct message for assistance.
- Potential Changes and Features in Projected Release: With speculative hints from moderators about imminent updates, users speculate about feature drops, removal of Opus limits, or other improvements, leading to eager anticipation for the April 23rd announcement.
Links mentioned:
- rabbit r1 - pickup party nyc live at 8PM ET: streaming from the r1 pickup party event in NYC
- Use Your Self-Hosted LLM Anywhere with Ollama Web UI: no description found
- Bloomberg - Are you a robot?: no description found
- 🏡 Home | Open WebUI: Open WebUI is an extensible, feature-rich, and user-friendly self-hosted WebUI designed to operate entirely offline. It supports various LLM runners, including Ollama and OpenAI-compatible APIs.
- Apply to Y Combinator | Y Combinator: To apply for the Y Combinator program, submit an application form. We accept companies twice a year in two batches. The program includes dinners every Tuesday, office hours with YC partners and access...
- Yann LeCun - Wikipedia: no description found
- Superstore Amy Sosa GIF - Superstore Amy Sosa Im Just Guessing - Discover & Share GIFs: Click to view the GIF
- Money Mr GIF - Money Mr Krabs - Discover & Share GIFs: Click to view the GIF
- Andrej Karpathy: FAQ Q: How can I pay you? Do you have a Patreon or etc? A: As YouTube partner I do share in a small amount of the ad revenue on the videos, but I don't maintain any other extra payment channels. I...
- Think About It Use Your Brain GIF - Think About It Use Your Brain Use The Brain - Discover & Share GIFs: Click to view the GIF
- Morphic: A fully open-source AI-powered answer engine with a generative UI.
- Yt Youtube GIF - Yt Youtube Logo - Discover & Share GIFs: Click to view the GIF
- Heidi Klum Number Two GIF - Heidi Klum Number Two 2Fingers - Discover & Share GIFs: Click to view the GIF
- Tweet from Aravind Srinivas (@AravSrinivas): We have many Perplexity users who tell us that their companies don't let them use it at work due to data and security concerns, but they really want to. To address this, we're excited to be la...
- Tweet from Aravind Srinivas (@AravSrinivas): 4/23
- GroqCloud: Experience the fastest inference in the world
- ChatGPT vs Notion AI: An In-Depth Comparison For Your AI Writing Needs: A comprehensive comparison between two AI tools, ChatGPT and Notion AI, including features, pricing and use cases.
- Tweet from Aravind Srinivas (@AravSrinivas): 8b is so good. Can create a lot more experiences with it. We have some ideas. Stay tuned! ↘️ Quoting MachDiamonds (@andromeda74356) @AravSrinivas Will you be switching the free perplexity version t...
- GitHub - mckaywrigley/clarity-ai: A simple Perplexity AI clone.: A simple Perplexity AI clone. Contribute to mckaywrigley/clarity-ai development by creating an account on GitHub.
- OpenAI's latest arrives in Copilot. The programming assistant evolves with a new AI model: Over the last year, artificial intelligence has not only been behind image generators like DALL·E and conversational bots like ChatGPT, it has also...
- GitHub - developersdigest/llm-answer-engine: Build a Perplexity-Inspired Answer Engine Using Next.js, Groq, Mixtral, Langchain, OpenAI, Brave & Serper: Build a Perplexity-Inspired Answer Engine Using Next.js, Groq, Mixtral, Langchain, OpenAI, Brave & Serper - developersdigest/llm-answer-engine
- Eric Gundersen Talks About How Mapbox Uses AWS to Map Millions of Miles a Day: Learn more about how AWS can power your big data solution here - http://amzn.to/2grdTah.Mapbox is collecting 100 million miles of telemetry data every day us...
- Robot Depressed GIF - Robot Depressed Marvin - Discover & Share GIFs: Click to view the GIF
- AWS re:Invent 2023 - Customer Keynote Perplexity | AWS Events: Hear from Aravind Srinivas, cofounder and CEO of Perplexity, about how the conversational artificial intelligence (AI) company is reimagining search by provi...
- Rick Astley - Never Gonna Give You Up (Official Music Video): The official video for "Never Gonna Give You Up" by Rick Astley. The new album 'Are We There Yet?' is out now: Download here: https://RickAstley.lnk.to/AreWe...
- AWS re:Invent 2023 - Customer Keynote Anthropic: In this AWS re:Invent 2023 fireside chat, Dario Amodei, CEO and cofounder of Anthropic, and Adam Selipsky, CEO of Amazon Web Services (AWS) discuss how Anthr...
- no title found: no description found
- GitHub - xx025/carrot: Free ChatGPT Site List (a collection of free, usable ChatGPT mirror sites): Free ChatGPT Site List. Contribute to xx025/carrot development by creating an account on GitHub.
Perplexity AI ▷ #sharing (29 messages🔥):
- Perplexity AI Searches Shared: Members of the Sharing channel shared various links to Perplexity AI searches ranging from topics like positive parenting to instructions for unclear prompts. Each shared Perplexity page tackles specific questions or informational requests.
- Guidance on Sharing: Users are reminded to ensure their shared threads are shareable, with a link provided to instructions on making a thread shareable.
- Perplexity AI Making Headlines: The AI search engine startup Perplexity AI has been featured in news outlets, with discussions on the channel about its recent valuation increase and fundraising efforts. A TechCrunch article and a CNBC interview with CEO Aravind Srinivas were shared, highlighting the company's growth and enterprise launch.
- CEO's CNBC Interview Transcribed: An unofficial transcript of an exclusive CNBC interview with Perplexity Founder & CEO Aravind Srinivas was shared, along with a link to the accompanying video interview.
- Company Valuation Discussions: Members discussed the increasing valuation of Perplexity AI, which is reportedly raising at least $250 million more at a valuation of between $2.5 billion and $3 billion, marking rapid growth since its last funding round.
Links mentioned:
- CNBC Exclusive: CNBC Transcript: Perplexity Founder & CEO Aravind Srinivas Speaks with CNBC's Andrew Ross Sorkin on "Squawk Box" Today: no description found
- EXCLUSIVE: Perplexity is raising $250M+ at a $2.5-$3B valuation for its AI search platform, sources say: Perplexity, the AI search engine startup, is a hot property at the moment. TechCrunch has learned that the company is currently raising at least $250
- Perplexity CTO Denis Yarats on AI-powered search: Perplexity is an AI-powered search engine that answers user questions. Founded in 2022 and valued at over $1B, Perplexity recently crossed 10M monthly active...
Perplexity AI ▷ #pplx-api (3 messages):
- Seeking GPT with Internet Access: A new member inquired about an API similar to ChatGPT but with Internet access and up-to-date information from the web. They were provided with a link to Perplexity's documentation and informed about the sonar online models, which offer Internet access, along with an invitation to sign up for access to citations.
- A Pointer for Improved Model Performance: A member suggested enhancing performance by including one-shot examples in the prompt, possibly yielding more precise results or instructions better understood by the model (both ideas are sketched below).
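Both ideas combine naturally, since Perplexity's API is OpenAI-compatible. A hedged sketch follows; the base URL and the `sonar-medium-online` model name are assumptions based on the docs as they stood at the time, so verify them against the current documentation:

```python
from openai import OpenAI

# Perplexity exposes an OpenAI-compatible endpoint (assumed base URL and
# model name; check docs.perplexity.ai before relying on them).
client = OpenAI(api_key="YOUR_PPLX_API_KEY", base_url="https://api.perplexity.ai")

response = client.chat.completions.create(
    model="sonar-medium-online",  # "online" models can pull fresh web information
    messages=[
        {"role": "system", "content": "Answer concisely with up-to-date facts."},
        # One-shot example, per the suggestion above, to pin down the format:
        {"role": "user", "content": "Who is the CEO of NVIDIA?"},
        {"role": "assistant", "content": "Jensen Huang."},
        {"role": "user", "content": "Who is the CEO of Perplexity?"},
    ],
)
print(response.choices[0].message.content)
```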
Stability.ai (Stable Diffusion) ▷ #general-chat (1044 messages🔥🔥🔥):
- New Kid on the Block: A user stated that they are new to Stable Diffusion and in the process of downloading Forge Webui, inquiring whether it's a satisfactory choice or if there are better alternatives.
- Exploring AI's Creative Frontier: Various users discussed their interest in generating images and assets using AI tools such as Stable Diffusion. One mentioned wanting to make game assets, and another expressed a desire to generate spaceships and sci-fi themes.
- Technical Troubles: Several users sought technical help with issues ranging from CUDA errors and generation speed to missing nodes in ComfyUI. There are questions about using specific models in different interfaces like Forge and webui and inquiries about transferring installations between drives.
- AI Generated Futures: Casual conversations took place where users pondered using AI to create perfect representations of significant others or dream homes. There is clear excitement about the potential of AI to generate bespoke content.
- Anticipation for Stability AI Release: Users expressed curiosity and skepticism about the release and features of Stable Diffusion version 3, with some relaying information from the former CEO Emad and speculating on the timeline and true openness of the eventual release.
Links mentioned:
- Tweet from Christian Laforte (@chrlaf): @rajdhakad_ @USEnglish215753 @StabilityAI @EMostaque Our plan is to soon release the API first to collect more human preference data and validate that our safety improvements don't cause the quali...
- Crypto Wallet | Supports Bitcoin (BTC), Bitcoin Cash (BCH), Ethereum (ETH), and ERC-20 tokens: Download Bitcoin.com's multi-coin crypto wallet. A simple and secure way to buy, sell, trade, and use cryptocurrencies. Supports Bitcoin (BTC), Bitcoin Cash (BCH), Ethereum (ETH), and ERC-20 tokens in...
- glif - StableDiffusion 3 by fab1an: no description found
- ComfyUI: A better method to use stable diffusion models on your local PC to create AI art.
- CUDA-Enabled GeForce 1650?: If you cannot find the answer in the GROMACS documentation, I would suggest asking about GROMACS configuration issues on the official GROMACS mailing list: [url]http://www.gromacs.org/Support/Mailing...
- no title found: no description found
- no title found: no description found
- Image posted by pagartomas880: no description found
- CUDA Toolkit 12.1 Downloads: Get the latest feature updates to NVIDIA's proprietary compute stack.
- Exposing the Website that Stalks You in Discord!: There is a website called spy.pet that claims to have 4 billion messages saved across Discord. With this, you can "see what your friends are doing on Discord...
- GitHub - Stability-AI/stablediffusion: High-Resolution Image Synthesis with Latent Diffusion Models: High-Resolution Image Synthesis with Latent Diffusion Models - Stability-AI/stablediffusion
- GitHub - comfyanonymous/ComfyUI: The most powerful and modular stable diffusion GUI, api and backend with a graph/nodes interface.: The most powerful and modular stable diffusion GUI, api and backend with a graph/nodes interface. - comfyanonymous/ComfyUI
- Weird Science Official Trailer #1 - Robert Downey Jr. Movie (1985) HD: Subscribe to TRAILERS: http://bit.ly/sxaw6hSubscribe to COMING SOON: http://bit.ly/H2vZUnSubscribe to CLASSIC TRAILERS: http://bit.ly/1u43jDeLike us on FACEB...
- GitHub - AUTOMATIC1111/stable-diffusion-webui: Stable Diffusion web UI: Stable Diffusion web UI. Contribute to AUTOMATIC1111/stable-diffusion-webui development by creating an account on GitHub.
- character sheet - character sheet | Stable Diffusion LoRA | Civitai: no description found
- Reddit - Dive into anything: no description found
- GitHub - ltdrdata/ComfyUI-Manager: Contribute to ltdrdata/ComfyUI-Manager development by creating an account on GitHub.
- GitHub - megvii-research/HiDiffusion: Contribute to megvii-research/HiDiffusion development by creating an account on GitHub.
Nous Research AI ▷ #ctx-length-research (5 messages):
- Tensor Parallel with VLLM: Reference to progress on implementing tensor parallel with VLLM was made, with the anticipation of jamba support for enhancing model performance.
- Anticipating Jamba API Release: There's an expressed need for a jamba API that would allow using the entire context for a particular modeling task.
- Seeking Economical Context Management: A user shared the struggle with managing context economically when using Claude 3 and Big-AGI, where costs escalate quickly. They found potential solutions like memGPT and SillyTavern SmartContext, and are seeking additional solutions for efficient context management.
Nous Research AI ▷ #off-topic (22 messages🔥):
- Beastie Boys Get REMASTERED: A YouTube video titled "Beastie Boys - Root Down" was shared; it's part of a remastered HD series that includes a backstory about the "Ill Communication" album.
- deadmau5 & Kaskade Remembered in High Quality: Another YouTube share featured deadmau5 & Kaskade's track "I Remember (HQ)", showcasing the song's quality and providing links to more music and tour information.
- Latent Humor in CIFAR100: The CIFAR100 dataset has been humorously encoded into 100 classes and shared as latent-CIFAR100, with safetensors recommended for usage in the 488 latent size version.
- Seeking Bigger Pixels for Image Classification: A member inquired about larger image classification datasets (64x64 or 128x128) after sharing that a simple feedforward neural network yielded around 19% accuracy on a latently encoded dataset with dimensions of 4x4x4 (a toy sketch of such a classifier follows this list).
- Papers on Symbol Systems and Language Models: A contribution of scholarly papers focused on language models and their symbolic representation, pointing to the semantic vector space as a phase in which symbolic meaning can emerge, analogous to language understanding in LLMs.
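For a sense of scale, a feedforward classifier over 4x4x4 latents is tiny. A toy PyTorch sketch (random tensors stand in for the actual latent-CIFAR100 data; shapes are assumed from the discussion):

```python
import torch
import torch.nn as nn

# Toy feedforward classifier over 4x4x4 latents (64 features) with
# CIFAR100's 100 output classes, as described in the discussion.
model = nn.Sequential(
    nn.Flatten(),         # (batch, 4, 4, 4) -> (batch, 64)
    nn.Linear(64, 256),
    nn.ReLU(),
    nn.Linear(256, 100),  # logits over 100 classes
)

opt = torch.optim.AdamW(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# One training step on a random batch, standing in for the real dataset.
x = torch.randn(32, 4, 4, 4)
y = torch.randint(0, 100, (32,))
loss = loss_fn(model(x), y)
loss.backward()
opt.step()
print(f"loss: {loss.item():.3f}")
```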
Links mentioned:
- Verah/latent-CIFAR100 · Datasets at Hugging Face: no description found
- Do Llamas Work in English? On the Latent Language of Multilingual Transformers: We ask whether multilingual language models trained on unbalanced, English-dominated corpora use English as an internal pivot language -- a question of key importance for understanding how language mo...
- Hellinheavns GIF - Hellinheavns - Discover & Share GIFs: Click to view the GIF
- The Linear Representation Hypothesis and the Geometry of Large Language Models: Informally, the 'linear representation hypothesis' is the idea that high-level concepts are represented linearly as directions in some representation space. In this paper, we address two close...
- Beastie Boys - Root Down: REMASTERED IN HD!Read the story behind Ill Communication here: https://www.udiscovermusic.com/stories/ill-communication-beastie-boys-album/Listen to more fro...
- deadmau5 & Kaskade - I Remember (HQ): ▶️ https://deadmau5.ffm.to/randomalbumtitle follow deadmau5 & friends here: https://sptfy.com/PjDOcurrent tour info here: https://deadmau5.com/showsjoin the ...
Nous Research AI ▷ #interesting-links (20 messages🔥):
- DeepMind's New Toolkit for Neural Networks: Google DeepMind has introduced Penzai, a JAX research toolkit designed to build, edit, and visualize neural networks, aiming to enhance the way researchers interact with their models.
- Call for Beta Testers for Advanced Research Assistant: Rubik.ai is seeking beta testers for an advanced research assistant and search engine featuring models like Claude 3 Opus, GPT-4 Turbo, and others, offering two months of free premium access with the promo code `RUBIX`.
- Exploring Loss Curves in Training Large Language Models: Discussions revolved around diagnosing and understanding unusual patterns in loss curves while training models, with speculation that low batch sizes and uneven loss landscapes might be contributing factors.
- Archive of GPT System Prompts Now Available: EveryoneIsGross/GPTs hosts a collection of system prompts for GPT experiments, which include implementations of various papers and experiments in embeddings, RP, RAG, and other concepts.
- Reddit Post Questions LMSYS Benchmark's Validity: A Reddit post challenges the usefulness of the LMSYS benchmark, suggesting it is becoming less reliable due to the difficulty of crafting questions that accurately differentiate model intelligence.
Links mentioned:
- Reddit - Dive into anything: no description found
- Tweet from vik (@vikhyatk): weird loss curve, won't be able to sleep tonight if i don't figure out what's causing those dips early on
- GitHub - google-deepmind/penzai: A JAX research toolkit for building, editing, and visualizing neural networks.: A JAX research toolkit for building, editing, and visualizing neural networks. - google-deepmind/penzai
- GitHub - EveryOneIsGross/GPTs: loading zone for my GPT experiments and tools.: loading zone for my GPT experiments and tools. Contribute to EveryOneIsGross/GPTs development by creating an account on GitHub.
Nous Research AI ▷ #general (650 messages🔥🔥🔥):
- LLaMA vs Phi Showdown: Discussions intensify as members compare the newly released Phi-3-mini model against LLaMA-3 and GPT-3.5. The performance of Phi-3-mini, especially in 4-bit quantization, is scrutinized amid concerns over repetitive output, and the model weights are eagerly awaited.
- Technical Glitches at Hugging Face: Hugging Face faces downtime, with speculation that the new FineWeb dataset or LLaMA-3 demand may be contributing to the outages. While service has intermittently returned, issues persist.
- Tricky Model Behavior: Conversations around LLaMA-3 indicate a propensity for the models to hallucinate or fail to embrace new information after fine-tuning. The Phi-3-mini model, in particular, is reported to have issues with stopping generation and may have a misconfigured EOS token (a workaround sketch follows this list).
- Efficiency in Model Fine-Tuning: Members talk about QLoRA versus LoRA for fine-tuning large language models and share opinions on their effectiveness and potential uses in production, notably with references to QLoRA research.
- Emerging Developer Interest: Calls are made for developers engaged in models, datasets, or systems using AI models to connect, suggesting a growing community keen on discussing and potentially collaborating on AI and NLP projects.
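If Phi-3-mini's EOS token is indeed misconfigured, a common stopgap is to pass the intended stop token explicitly at generation time instead of relying on the checkpoint's generation config. A minimal sketch with Hugging Face transformers, assuming `<|end|>` is the chat template's end-of-message token (verify against the model card in the links below):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL = "microsoft/Phi-3-mini-4k-instruct"  # from the links below
tok = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForCausalLM.from_pretrained(MODEL, trust_remote_code=True)

prompt = tok.apply_chat_template(
    [{"role": "user", "content": "Name three uses for a paperclip."}],
    add_generation_prompt=True,
    return_tensors="pt",
)

# Pass the chat-template stop token explicitly so generation halts even if
# the checkpoint's own EOS configuration is wrong (assumed token: <|end|>).
out = model.generate(
    prompt,
    max_new_tokens=128,
    eos_token_id=tok.convert_tokens_to_ids("<|end|>"),
)
print(tok.decode(out[0][prompt.shape[-1]:], skip_special_tokens=True))
```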
Links mentioned:
- Tweet from Susan Zhang (@suchenzang): it seems to enjoy talking itself out of the right solution...
- EvalPlus Leaderboard: no description found
- Tweet from Nathan Lambert (@natolambert): i really hope phi 3 proves us wrong about evaluation doping and it is actually an amazing model. But, being an outlier on log compute <-> MMLU plots is a little sus.
- Tweet from Awni Hannun (@awnihannun): Next level: QLoRA fine-tuning 4-bit Llama 3 8B on iPhone 15 pro. Incoming (Q)LoRA MLX Swift example by David Koski: https://github.com/ml-explore/mlx-swift-examples/pull/46 works with lot's of mo...
- Train and Fine-Tune Sentence Transformers Models: no description found
- How faithful are RAG models? Quantifying the tug-of-war between RAG and LLMs' internal prior: Retrieval augmented generation (RAG) is often used to fix hallucinations and provide up-to-date knowledge for large language models (LLMs). However, in cases when the LLM alone incorrectly answers a q...
- lmstudio-community/Meta-Llama-3-8B-Instruct-GGUF · Hugging Face: no description found
- NousResearch/Genstruct-7B · Hugging Face: no description found
- Tweet from Binyuan Hui (@huybery): Just evaluated coding abilities of Llama3-8B-base 👇🏻
- abacaj/phi-2-super · Hugging Face: no description found
- Rage GIF - Rage - Discover & Share GIFs: Click to view the GIF
- Tweet from Guilherme Penedo (@gui_penedo): We have just released 🍷 FineWeb: 15 trillion tokens of high quality web data. We filtered and deduplicated all CommonCrawl between 2013 and 2024. Models trained on FineWeb outperform RefinedWeb, C4, ...
- Paper page - How Good Are Low-bit Quantized LLaMA3 Models? An Empirical Study: no description found
- Tweet from Sebastien Bubeck (@SebastienBubeck): phi-3 is here, and it's ... good :-). I made a quick short demo to give you a feel of what phi-3-mini (3.8B) can do. Stay tuned for the open weights release and more announcements tomorrow mornin...
- microsoft/Phi-3-mini-4k-instruct · Hugging Face: no description found
- OpenRouter: A router for LLMs and other AI models
- GitHub - Mozilla-Ocho/llamafile: Distribute and run LLMs with a single file.: Distribute and run LLMs with a single file. Contribute to Mozilla-Ocho/llamafile development by creating an account on GitHub.
- tokenizer_config.json · microsoft/Phi-3-mini-128k-instruct at main: no description found
- GitHub - stanfordnlp/pyreft: ReFT: Representation Finetuning for Language Models: ReFT: Representation Finetuning for Language Models - stanfordnlp/pyreft
- Beastie Boys - Sabotage: REMASTERED IN HD!Read the story behind Ill Communication here: https://www.udiscovermusic.com/stories/ill-communication-beastie-boys-album/Listen to more fro...
- Replete-AI/OpenCodeInterpreterData · Datasets at Hugging Face: no description found
- Replete-AI/Rombo-Hermes-2.5-Extra-code · Datasets at Hugging Face: no description found
- HuggingFaceFW/fineweb · Datasets at Hugging Face: no description found
- Replete-AI/Rombo-Hermes-2.5-Extra-code-sub-50k · Datasets at Hugging Face: no description found
- Replete-AI/Rombo-Hermes-2.5-Extra-code-Medium · Datasets at Hugging Face: no description found
- Streamlit: no description found
Nous Research AI ▷ #ask-about-llms (78 messages🔥🔥):
- Dealing with OOM in Zero 3: A user reports that Deepspeed Zero 3 is significantly slower than Zero 2 and experiences OOM errors even with CPU offloading, wondering about normal behavior and seeking advice for optimal usage.
- Single-GPU Optimization vs. NVLink: One user ponders the best way to utilize dual RTX 3090s with NVLink for a single prompt to enhance performance while another suggests single-GPU usage is fastest, citing synchronization overhead with multi-GPU setups.
- Llama Fine-tuning and Training Guidelines: Discussions touch upon synthetic data generation for finetuning models within licensing rules, with one user warning against using generated data to improve non-Llama models and others discussing the correct ratios for example difficulty in finetuning.
- Learning Rate Techniques and Forgetting in LLMs: Users discuss whether techniques like discriminative learning rates and gradual unfreezing are still prevalent in 2024, with one user unfamiliar and another confirming they are indeed in use (a minimal sketch follows this list).
- Finding Suitable Fine-tuning Guides: Multiple users suggest best practices and resources for instruction fine-tuning, with preferences for Hugging Face blogs over Medium articles, and specific recommendations like the tutorials on Labonne's GitHub.
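For readers unfamiliar with the technique, discriminative learning rates simply give pretrained layers smaller steps than a freshly initialized head. A minimal PyTorch sketch (the two-part model is illustrative, not tied to any specific architecture):

```python
import torch
from torch import nn

# Illustrative model: a pretrained "body" updated gently and a new task
# "head" trained faster, i.e. discriminative learning rates.
model = nn.ModuleDict({
    "body": nn.Sequential(nn.Linear(128, 128), nn.ReLU(), nn.Linear(128, 128)),
    "head": nn.Linear(128, 2),
})

optimizer = torch.optim.AdamW([
    {"params": model["body"].parameters(), "lr": 1e-5},  # pretrained layers: small steps
    {"params": model["head"].parameters(), "lr": 1e-3},  # new head: larger steps
])

# Gradual unfreezing is the companion trick: start with body parameters
# frozen (p.requires_grad = False) and re-enable them after a few epochs.
```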
Links mentioned:
- chargoddard/mistral-11b-slimorca · Hugging Face: no description found
- Continual Learning for Large Language Models: A Survey: Large language models (LLMs) are not amenable to frequent re-training, due to high training costs arising from their massive scale. However, updates are necessary to endow LLMs with new skills and kee...
- LLM In-Context Recall is Prompt Dependent: The proliferation of Large Language Models (LLMs) highlights the critical importance of conducting thorough evaluations to discern their comparative advantages, limitations, and optimal use cases. Par...
- Attributed Question Answering: Evaluation and Modeling for Attributed Large Language Models: Large language models (LLMs) have shown impressive results while requiring little or no direct supervision. Further, there is mounting evidence that LLMs may have potential in information-seeking scen...
Nous Research AI ▷ #project-obsidian (7 messages):
- New Benchmark for AI Vision Models: xAI released their RealWorldQA benchmark dataset designed for Grok-1.5-vision-preview, offering direct question-answer scenarios.
- Confusion over Dataset's Purpose: There was brief confusion over whether RealWorldQA was a training set or a benchmark; it was later clarified to be a benchmark, as mentioned on xAI's blog post about Grok-1.5.
- Additional Dataset Interest: Some members expressed enthusiasm for the new benchmark dataset, suggesting it could be useful for testing future versions of Obsidian.
- Desire for Training Sets: Despite recognizing the usefulness of the benchmark data, members still indicated an interest in having access to a training dataset.
Links mentioned:
- Grok-1.5 Vision Preview: no description found
- xai-org/RealworldQA · Datasets at Hugging Face: no description found
Nous Research AI ▷ #rag-dataset (89 messages🔥🔥):
- Evaluating RAG with LLaMA: Discussion centers on evaluating Retrieval-Augmented Generation (RAG) performance using LlamaIndex, suggesting that Mistral 7b v2 seems to outperform other models like LLaMA 3b instruct. A useful resource for this evaluation is shared: OpenAI Cookbook example.
- Deciphering Superposition Prompting: The community explores a paper on a new RAG prompting method called superposition prompting, which aims to process long contexts more efficiently (Superposition Prompting Paper). A member shares their practical use of the method in production, with considerations about ordering the context.
- Researchers Share RAG Insights: Several papers on RAG methodologies were shared, highlighting innovations like improving retrieval with LLMs and credibility-aware generation, as well as addressing challenges in long-context inference. Notably, an overview paper details the evolution and organization of the RAG framework (RAG Evolution Paper).
- Function-Calling RAG Techniques: Blog posts by Pamela Fox on RAG techniques using function-calling were cited extensively as resources that do the heavy lifting for understanding and implementing RAG approaches (Pamela Fox's RAG post); a sketch of the pattern follows this list. Additionally, the GitHub repository from Azure-Samples serves as an exemplar for setting up RAG approaches (Azure-Samples GitHub).
- Fusion of Retrieval and Generation in RAG: Conversation turns toward integrating retrieval as part of an LLM's plan to create semi-structured output grounded in document references. Examples included a blend of Cohere's and Claude-3's capabilities to demonstrate this approach, along with a call to create benchmarks for RAG models that synthesize information from multiple documents (CLA Document Format).
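The function-calling pattern from those posts boils down to letting the model decide when to call a search tool and with what query, then grounding the final answer in what comes back. A minimal sketch against the OpenAI chat API; the `search_docs` tool and its backing index are hypothetical placeholders, and the happy path (the model chooses to call the tool) is assumed:

```python
import json
from openai import OpenAI

client = OpenAI()

# Hypothetical retrieval backend; swap in a real vector or keyword index.
def search_docs(query: str) -> str:
    return json.dumps([{"title": "Benefits policy", "snippet": "..."}])

tools = [{
    "type": "function",
    "function": {
        "name": "search_docs",
        "description": "Search the document index for relevant passages.",
        "parameters": {
            "type": "object",
            "properties": {"query": {"type": "string"}},
            "required": ["query"],
        },
    },
}]

messages = [{"role": "user", "content": "What does the benefits policy say about dental?"}]
resp = client.chat.completions.create(model="gpt-4-turbo", messages=messages, tools=tools)

# Assume the model chose to call the tool; run the search, feed results back.
call = resp.choices[0].message.tool_calls[0]
messages.append(resp.choices[0].message)
messages.append({
    "role": "tool",
    "tool_call_id": call.id,
    "content": search_docs(**json.loads(call.function.arguments)),
})
final = client.chat.completions.create(model="gpt-4-turbo", messages=messages)
print(final.choices[0].message.content)
```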
Links mentioned:
- Tweet from Stella Biderman (@BlancheMinerva): Create a benchmark for RAG models where all of the questions require information from multiple documents to be synthesized answer them. Study how models trained on publicly released data do on it and ...
- A Survey on Retrieval-Augmented Text Generation for Large Language Models: Retrieval-Augmented Generation (RAG) merges retrieval methods with deep learning advancements to address the static limitations of large language models (LLMs) by enabling the dynamic integration of u...
- Superposition Prompting: Improving and Accelerating Retrieval-Augmented Generation: Despite the successes of large language models (LLMs), they exhibit significant drawbacks, particularly when processing long contexts. Their inference cost scales quadratically with respect to sequenc...
- Long context window tips: no description found
- Evaluate RAG with LlamaIndex | OpenAI Cookbook: no description found
- LLM-Augmented Retrieval: Enhancing Retrieval Models Through Language Models and Doc-Level Embedding: Recently embedding-based retrieval or dense retrieval have shown state of the art results, compared with traditional sparse or bag-of-words based approaches. This paper introduces a model-agnostic doc...
- RAG techniques: Function calling for more structured retrieval: no description found
- RAG techniques: Cleaning user questions with an LLM: no description found
- azure-search-openai-demo/app/backend/approaches at main · Azure-Samples/azure-search-openai-demo: A sample app for the Retrieval-Augmented Generation pattern running in Azure, using Azure AI Search for retrieval and Azure OpenAI large language models to power ChatGPT-style and Q&A experie...
- Retrieval Augmented Generation (RAG) - Cohere Docs: no description found
- Evaluating RAG chat apps: Can your app say "I don't know"?: no description found
- GitHub - HK3-Lab-Team/PredCST: Learning Predictive Models of Concrete Syntax Tree from text.: Learning Predictive Models of Concrete Syntax Tree from text. - HK3-Lab-Team/PredCST
- Not All Contexts Are Equal: Teaching LLMs Credibility-aware Generation: The rapid development of large language models has led to the widespread adoption of Retrieval-Augmented Generation (RAG), which integrates external knowledge to alleviate knowledge bottlenecks and mi...
- RAR-b: Reasoning as Retrieval Benchmark: Semantic textual similartiy (STS) and information retrieval tasks (IR) tasks have been the two major avenues to record the progress of embedding models in the past few years. Under the emerging Retrie...
- A RAG Method for Source Code Inquiry Tailored to Long-Context LLMs: Although the context length limitation of large language models (LLMs) has been mitigated, it still hinders their application to software development tasks. This study proposes a method incorporating ...
Nous Research AI ▷ #world-sim (343 messages🔥🔥):
- Creative AI Alternatives Take Stage: While awaiting the official WorldSim platform's return, many users have shifted to alternative interpretations like Super WorldSim and Snow World Simulator hosted on HuggingChat. They are tailoring these alternatives to offer specialized experiences, such as crafting superhero universes or playing D&D-like games.
- Super WorldSim Evolves with Improvements: Continuing updates from Jetblackrlsh are introducing new features to Super WorldSim, such as Mind Meld and Improv, enhancing the user experience and aligning closer to the sophistication of Claude Opus.
- Community Imaginations Flourish: Amidst the platform alternatives, users are engaging deeply, evolving complex fictional worlds, and generating extensive phylogenetic trees to document their simulated speciesâ development over millions of years.
- Discord as a Stage for Democratic World Building: A notable trend is emerging with users like Rundeen setting up democratically controlled WorldSim bots on Discord. The community is enthusiastic about the potential for collaborative story-building and exploration.
- Open Models Pave Future of AI Simulations: A consensus seems to be forming that open-source AI models will be significant for future WorldSim-like experiences. Llama 3's anticipated larger models have caught particular attention for their potential to drive these creative simulations forward.
Links mentioned:
- world_sim: no description found
- HuggingChat: Making the community's best AI chat models available to everyone.
- GroqCloud: Experience the fastest inference in the world
- Super World Sim - HuggingChat: Use the Super World Sim assistant inside of HuggingChat
- Snow World Simulator - HuggingChat: Use the Snow World Simulator assistant inside of HuggingChat
- Snow Singer Simulator - HuggingChat: Use the Snow Singer Simulator assistant inside of HuggingChat
- no title found: no description found
- Available now at your favorite digital store!: The Architects' Conundrum: Quantumom vs. Data Dad by Nicholas Alexander Benson
- Image Generator - HuggingChat: Use the Image Generator assistant inside of HuggingChat
- Suzanne Treister - Amiga Videogame Stills - menu: no description found
- eternal mode • infinite backrooms: the mad dreams of an artificial intelligence - not for the faint of heart or mind
LM Studio ▷ #💬-general (635 messages🔥🔥🔥):
- GPU Offloading and System Resource Usage: Users discussed LM Studio performance on various GPUs, with specific concerns about running models on AMD GPUs via ROCm and on Nvidia GPUs. It was noted that GPU offloading is necessary for maximizing performance; if the system isn't offloading correctly, it can peg the CPU at 100%, causing inefficiency.
- Issues with LM Studio and Hugging Face: Users reported being unable to search and download models due to Hugging Face downtime, which affected LM Studio's functionality and produced error messages like 503 and 500. Heyitsyorkie confirmed that Hugging Face was having API issues affecting the model explorer.
- Utilizing LLMs in LM Studio: Users sought advice on crafting system prompts for role-play scenarios like a D&D campaign, as well as on handling max token limits and rolling windows within conversations. One suggestion was to use the "AI assistant (python)" preset in LM Studio and end prompts with an example of the expected JSON schema (a sketch of this follows the list).
- Model and API Issues: Discussions included queries about loading specific models, issues with unsupported processor instructions like AVX2, authorization problems, and error messages such as "Unsupported format". Users requested potential fixes and workarounds.
- AI Models and Quantization Questions: Users probed the differences between various quantizations (e.g., IQ1M vs. IQ2XS) and discussed the upcoming Llama 3 400b model, conjecturing about the system requirements needed to run such a large model.
- LM Studio Feature Requests and Feedback: Users expressed a desire for features like running LM Studio in the background and questioned the lack of a privacy policy. Praise was also given for making AI accessible through LM Studio.
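The schema-in-prompt trick pairs well with LM Studio's local OpenAI-compatible server (see the Local LLM Server link below). A minimal sketch, assuming the default server address of http://localhost:1234/v1 and whichever model is currently loaded:

```python
from openai import OpenAI

# LM Studio's local server speaks the OpenAI API (default port 1234; the
# API key is ignored but the client requires one).
client = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")

# Ending the system prompt with an example of the expected JSON schema, as
# suggested in the channel, nudges the model toward parseable output.
system = (
    "Extract structured data from the user's text. Reply with JSON only, "
    'exactly in this shape: {"name": "string", "age": 0, "city": "string"}'
)

resp = client.chat.completions.create(
    model="local-model",  # LM Studio serves whichever model is loaded
    messages=[
        {"role": "system", "content": system},
        {"role": "user", "content": "Ana is 31 and lives in Lisbon."},
    ],
    temperature=0,
)
print(resp.choices[0].message.content)
```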
Links mentioned:
- Tweet from LM Studio (@LMStudioAI): Model search / download within LM Studio may be impacted by this Hugging Face downtime. Stay tuned for updates ↘️ Quoting Hugging Face Status (@hf_status) We're experiencing some downtime on h...
- 👾 LM Studio - Discover and run local LLMs: Find, download, and experiment with local LLMs
- LMStudio | AnythingLLM by Mintplex Labs: no description found
- Local LLM Server | LM Studio: You can use LLMs you load within LM Studio via an API server running on localhost.
- LM Studio Beta Releases: no description found
- IBM Technology: Whether itâs AI, automation, cybersecurity, data science, DevOps, quantum computing or anything in between, we provide educational content on the biggest topics in tech. Subscribe to build your skills...
- OpenAI compatibility · Ollama Blog: Ollama now has initial compatibility with the OpenAI Chat Completions API, making it possible to use existing tooling built for OpenAI with local models via Ollama.
- Reddit - Dive into anything: no description found
- lmstudio-community/Meta-Llama-3-8B-Instruct-GGUF at main: no description found
- Reddit - Dive into anything: no description found
- Big Code Models Leaderboard - a Hugging Face Space by bigcode: no description found
- Qwen/CodeQwen1.5-7B-Chat-GGUF · Hugging Face: no description found
- Vision Models (GGUF) - a lmstudio-ai Collection: no description found
- Reddit - Dive into anything: no description found
- [1hr Talk] Intro to Large Language Models: This is a 1 hour general-audience introduction to Large Language Models: the core technical component behind systems like ChatGPT, Claude, and Bard. What the...
- lmstudio-community/Meta-Llama-3-70B-Instruct-GGUF at main: no description found
- Reddit - Dive into anything: no description found
- GitHub - Crizomb/ai_pdf: Chat locally with any PDF Ask questions, get answer with usefull references Work well with math pdfs (convert them to LaTex, a math syntax comprehensible by computer): Chat locally with any PDF Ask questions, get answer with usefull references Work well with math pdfs (convert them to LaTex, a math syntax comprehensible by computer) - Crizomb/ai_pdf
- GitHub - BBC-Esq/VectorDB-Plugin-for-LM-Studio: Plugin that creates a ChromaDB vector database to work with LM Studio running in server mode!: Plugin that creates a ChromaDB vector database to work with LM Studio running in server mode! - BBC-Esq/VectorDB-Plugin-for-LM-Studio
- GitHub - mlabonne/llm-course: Course to get into Large Language Models (LLMs) with roadmaps and Colab notebooks.: Course to get into Large Language Models (LLMs) with roadmaps and Colab notebooks. - mlabonne/llm-course
- Reddit - Dive into anything: no description found
- Hugging Face status : no description found
LM Studio ▷ #🤖-models-discussion-chat (314 messages🔥🔥):
- Llama 3 and Alternative Models: Users are exploring various versions of Llama 3 for better performance, comparing it against models like Goliath 120B and discussing Mistral. Conversations cover Llama 3's benchmark performance and whether finetuned variants could match GPT-4.
- Meta-Llama-3-8B-Instruct-GGUF Trepidation: Concerns were raised about an infinite-generation issue with Llama 3 8B Instruct GGUF, where the model keeps generating content endlessly. Users suggested fixes involving stop strings (see the sketch after this list) and considered trying different model versions.
- In Search of Unrestricted Content Creation: A discussion took place on the level of content restriction in different models like Llama 3, with suggestions to modify the system prompt to reduce censorship.
- Phi-3 Excites and Entices: Members are evaluating Phi-3, noting its impressive performance on certain tasks despite its smaller size compared to larger models. There's anticipation about Phi-3 compatibility and performance with LM Studio.
- Technical Troubleshooting and Version Queries: Users sought help and clarification on LM Studio's ability to handle models like Meta-Llama-3-8B-Instruct-Q4_K_M.gguf, the impact of context size on model performance, and comparisons against the high bar set by OpenAI's GPT-4. There were also mentions of running LM Studio on a headless server and explanations of slang like "mog".
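The stop-string workaround for the runaway GGUF generations can also be applied client-side when talking to a local server. A sketch against an OpenAI-compatible endpoint, assuming Llama 3's `<|eot_id|>` end-of-turn marker is what should terminate output:

```python
from openai import OpenAI

client = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")

resp = client.chat.completions.create(
    model="lmstudio-community/Meta-Llama-3-8B-Instruct-GGUF",
    messages=[{"role": "user", "content": "Give me three facts about llamas."}],
    # Halt generation at Llama 3's end-of-turn marker, preventing the
    # endless-output behavior reported in the channel.
    stop=["<|eot_id|>"],
    max_tokens=256,
)
print(resp.choices[0].message.content)
```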
Links mentioned:
- no title found: no description found
- lmstudio-community/Meta-Llama-3-8B-Instruct-GGUF · Hugging Face: no description found
- Notion â The all-in-one workspace for your notes, tasks, wikis, and databases.: A new tool that blends your everyday work apps into one. It's the all-in-one workspace for you and your team
- Yoda Star GIF - Yoda Star Wars - Discover & Share GIFs: Click to view the GIF
- PyPy's sandboxing features - PyPy documentation: no description found
- microsoft/Phi-3-mini-4k-instruct-gguf · Hugging Face: no description found
- microsoft/Phi-3-mini-128k-instruct · Hugging Face: no description found
- Models - Hugging Face: no description found
- configs/llama3.preset.json at main · lmstudio-ai/configs: LM Studio JSON configuration file format and a collection of example config files. - lmstudio-ai/configs
- Tweet from Hrishi (@hrishioa): Is anyone finetuning an instruct version of llama3-42b? Would be really interesting if it can serve as a good/smart/client-side GPT-4 replacement https://www.reddit.com/r/LocalLLaMA/comments/1c9u2jd/...
- chargoddard/llama3-42b-v0 · Hugging Face: no description found
- GitHub - OpenInterpreter/open-interpreter: A natural language interface for computers: A natural language interface for computers. Contribute to OpenInterpreter/open-interpreter development by creating an account on GitHub.
- GitHub - ggerganov/llama.cpp: LLM inference in C/C++: LLM inference in C/C++. Contribute to ggerganov/llama.cpp development by creating an account on GitHub.
- GitHub - abetlen/llama-cpp-python: Python bindings for llama.cpp: Python bindings for llama.cpp. Contribute to abetlen/llama-cpp-python development by creating an account on GitHub.
LM Studio ▷ #announcements (1 messages):
- Hugging Face Downtime Affects LM Studio: LM Studio's model search and download functionality may be currently impaired due to Hugging Face downtime. The team is monitoring the situation and promises to provide updates as they come.
Link mentioned: Tweet from LM Studio (@LMStudioAI): Model search / download within LM Studio may be impacted by this Hugging Face downtime. Stay tuned for updates ↘️ Quoting Hugging Face Status (@hf_status) We're experiencing some downtime on h…
LM Studio ▷ #🧠-feedback (27 messages🔥):
- Llama3 Encountering a Load Issue: Multiple users report issues loading models with Llama3 after the 0.2.20 update, prompting suggestions to post detailed problems in a specific channel. The error logs show a generic "Error loading model" without suggestions, hinting at a potential bug due to recent updates.
- Gratitude for LM Studio: A professional writer and AI researcher expressed deep appreciation for LM Studio, stating it significantly aids their productivity. This heartfelt feedback underscores the impact of LM Studio on usersâ workflow.
- Unexpected Model Behavior Noted: A user observed llama models sometimes outputting numbers instead of answers when asked general topics. This unusual behavior suggests a potential glitch in model responses.
- VPN Causes Certificate Issues with LM Studio: Users with Zscaler VPN are unable to download models in LM Studio due to "unable to get local issuer certificate" errors. Workarounds mentioned include downloading models on a different machine, but underlying mechanisms remain unclear, as exiting the VPN resolves the issue.
- Queries for Hugging Face Models in LM Studio Trigger Errors: There's a 500 error when searching for particularly popular models in LM Studio. Users speculate that Hugging Face may be blocking terms like "Llama" or "Llama3" due to heavy traffic, while alternative searches using "lmstudio-community" work fine.
LM Studio ▷ #prompts-discussion-chat (12 messages🔥):
- Seeking Full Code Output: A user asked for a way to make the LLM always write full code instead of inserting comments like // Add similar event listeners for left and right buttons.
- Exploring Endless Adventure: Someone inquired about the best prompt for creating an endless sandbox adventure simulation game using Llama3, and also pondered whether Llama3 can generate prompts for itself.
- Configuring Llama-3-Smaug-8B Prompts: A member sought assistance configuring prompts in LM Studio for the Llama-3-Smaug-8B model and wondered about the correct usage of system and user prefixes and suffixes, as their attempts led to non-stop output (see the template sketch after this list).
- Prompt Configuration Clarification: Another user clarified that configuring prompts for the model in question is the same as using the regular bundled Llama 3 preset in v0.2.20 of LM Studio.
- LM Studio Update and Model Search Issue: Following a discussion about the latest LM Studio build, a 503 error when searching for models was reported, with a respondent referencing a Discord channel link for further assistance, but the link was provided as "null".
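For reference, Llama 3 Instruct models use header-delimited special tokens, and runaway output is usually a sign that the client is not stopping on `<|eot_id|>`; the bundled Llama 3 preset is meant to encode the same structure. A minimal sketch of the template:

```python
# Llama 3 Instruct prompt format; non-stop output usually means the
# client is not treating <|eot_id|> as a stop token.
def llama3_prompt(system: str, user: str) -> str:
    return (
        "<|begin_of_text|>"
        "<|start_header_id|>system<|end_header_id|>\n\n" + system + "<|eot_id|>"
        "<|start_header_id|>user<|end_header_id|>\n\n" + user + "<|eot_id|>"
        "<|start_header_id|>assistant<|end_header_id|>\n\n"
    )

print(llama3_prompt("You are a helpful assistant.", "Hello!"))
```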
Link mentioned: bartowski/Llama-3-Smaug-8B-GGUF · Hugging Face: no description found
LM Studio ▷ #hardware-discussion (59 messages🔥🔥):
- Searching for Suitable GPUs: Users in the channel discussed upgrading laptops to run LLMs with NVIDIA GPUs. A guide was shared from Reddit, titled The LLM GPU Buying Guide - August 2023, but it was noted that upgrading GPUs in laptops is uncommon and may require external solutions for some machines.
- Troubleshooting Model Loading Errors: A user encountered an "Error loading model" issue where the GPU type was not detected; the suggestion to turn off GPU offloading in the settings panel subsequently resolved the problem.
- Optimizing Hardware for Model Use: There were discussions about power consumption and efficiency when using secondary GPUs like a GTX 1060 for running larger models, with the consensus suggesting it's worth testing but to keep expectations low due to potential latency and power draw.
- Model Preferences for Research Papers: User queries about the best models for writing research papers led to mentions of Llama 3 8B and Claude 3, with the former being criticized for AI-like responses and the latter having limitations for free users.
- Mac Memory Potential for Running LLMs: Questions regarding the capabilities of a new 128 GB Mac to run large models like Grok sparked discussions, with suggestions made to conserve memory for the OS and a link provided to increase VRAM allocation using a `sudo` command on macOS. Further, it was implied that a Mac with the M2 Ultra and 192 GB of RAM can run 120b models well.
Links mentioned:
- Reddit - Dive into anything: no description found
- Reddit - Dive into anything: no description found
LM Studio ▷ #🧪-beta-releases-chat (10 messages🔥):
- LMStudio's Local Model Detection Glitch: A member reported an issue with LMStudio failing to detect locally saved files within a models directory that contains an NFS mount. Despite working in version 0.2.19 beta B, the issue arose in versions 0.2.19 beta C, 0.2.19, and 0.2.20.
- File System Hierarchy Hassle: Another member discussed the possible directory structure requirements of LMStudio, suggesting that additional directory levels above the typical maintainer/model hierarchy might contribute to the problem rather than NFS factors. The original poster confirmed using an additional directory level to differentiate between local and external storage.
- Directory Testing Advice: It was advised to confirm the directory structure as a potential cause of the problem by testing with a local file system, ensuring that models in new sub-directories are discovered and identified by the LMStudio app.
- Token Misconceptions Clarified: In the context of tokenization, members discussed that tokens in models do not necessarily align with syllables but can include various subword components like roots, prefixes, and suffixes (see the tokenizer sketch after this list). The complexity of how language models understand words and tokens was explored.
- Language Token Quantification: A member queried the convention around vocabulary size in language model training, reflecting on whether roughly 50,000 tokens is a standard figure due to tradition, efficacy, or a balance between complexity and model performance.
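On the token/syllable point, running any BPE tokenizer directly makes the subword behavior easy to see; a small sketch using tiktoken (the ~50,000 figure matches GPT-2's 50,257-entry vocabulary, while cl100k_base used below has roughly 100k entries):

```python
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # BPE vocabulary of ~100k entries
for word in ["unbelievable", "tokenization"]:
    ids = enc.encode(word)
    pieces = [enc.decode_single_token_bytes(t).decode("utf-8") for t in ids]
    print(word, "->", pieces)  # BPE subword pieces, not syllables
```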
LM Studio ▷ #autogen (20 messages🔥):
- Trouble with Autogen and Local LM Llama 3: Users are experiencing issues with Autogen when pointing it at a local Llama 3 in LM Studio, where it processes only 2 tokens and then stops. One expressed frustration, as the LM appears to be functioning but returns data prematurely.
- A No-Marketing Zone: A member was reminded that promoting tools is not permitted on this server and was asked to refrain from such activity in the future.
- Potential Fix for Token Limitation: A user who encountered a similar issue suggested setting max tokens to 3000, which seemed to resolve the problem for them (see the config sketch after this list). They also advised restarting Autogen, then creating a new agent and a new workflow afterwards.
- User Proxy Quirks within Autogen: There are also reports of the user proxy occasionally stopping its output abruptly or parroting phrases like "good job you did it", which diminishes the user experience, particularly in comparison to using the direct API.
- Issues with AutoGen Manager Agent: Another user inquired about difficulties in getting the AutoGen Manager agent to work with a local model, specifically running into an "unable to select speaker" error. No resolution was suggested within the provided messages.
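A sketch of the kind of configuration involved, assuming AutoGen's 0.2-style llm_config and LM Studio's default OpenAI-compatible server on port 1234; the max_tokens value mirrors the workaround reported above:

```python
import autogen

config_list = [{
    "model": "local-model",                  # LM Studio serves whatever is loaded
    "base_url": "http://localhost:1234/v1",  # LM Studio's local server endpoint
    "api_key": "lm-studio",                  # any non-empty string works locally
}]

assistant = autogen.AssistantAgent(
    name="assistant",
    llm_config={"config_list": config_list, "max_tokens": 3000},
)
```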
LM Studio ▷ #memgpt (1 messages):
- Inquiry about Project Integration: A member asked if there is a way to integrate a certain tool with LM Studio, expressing interest in accessing specific LM Studio project information if available.
LM Studio ▷ #amd-rocm-tech-preview (42 messages🔥):
- Meta Llama 3 LLM Excites Users: The Meta Llama 3 family of language models was shared, boasting dialogue optimization, helpfulness, and safety. Users are running these models successfully in LM Studio, as described in their Hugging Face repository details.
- Performance Discussions on AMD Hardware: Members reported the Meta-Llama-3-70B and Meta-Llama-3-8B models reaching token generation speeds of around 20 tok/s and 60 tok/s respectively on AMD GPUs such as the 7900 XTX. There's curiosity about whether future versions might run on lower-end hardware.
- ROCm Utilization Queries: A user highlighted irregular GPU utilization when inferring large models on a dual 7900 XTX setup with the ROCm tech preview. The combined GPU usage didn't reflect full utilization of one card.
- Issues and Fixes with LM Studio ROCm Preview: Users report bugs with GPU offloading in different versions of the LM Studio ROCm preview. One user mentioned solving their issue by removing certain environment variables, while another switched to the regular LM Studio build due to unsupported hardware.
- LM Studio GPU Selection Troubles and Solutions: Users discussed challenges in directing LM Studio to use a dedicated AMD GPU over an integrated one. Suggested solutions include disabling the integrated GPU in the BIOS and manually setting environment variables like `HIP_VISIBLE_DEVICES`.
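For script-based ROCm workloads the same idea looks like the following; LM Studio itself would need the variable set in the environment it is launched from:

```python
import os

# Expose only the dedicated GPU (device index 0 here) to the ROCm runtime,
# so the integrated GPU can never be selected. Must run before any GPU
# library is initialized.
os.environ["HIP_VISIBLE_DEVICES"] = "0"
```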
Links mentioned:
- NousResearch/Meta-Llama-3-70B-Instruct-GGUF · Hugging Face: no description found
- How to Disable Your Integrated Graphics on Windows 11: When games and other graphics-intensive applications starts to lag, this is what you do!
- How to Turn Your AMD GPU into a Local LLM Beast: A Beginner's Guide with ROCm | TechteamGB: no description found
- How to Turn Your AMD GPU into a Local LLM Beast: A Beginner's Guide with ROCm: RX 7600 XT on Amazon (affiliate): https://locally.link/kEJGLM Studio: https://lmstudio.ai/rocmProducts provided by GigabyteThose of us with NVIDIA GPUs, part...
CUDA MODE ▷ #general (34 messages🔥):
- X11 Forwarding as a GUI Solution: Members discussed using X forwarding with the `ssh -X` command as a way to use the Nsight Compute GUI via SSH, and a user who successfully set up the GUI provided a step-by-step guide for others to use Nsight Compute to profile remote GPUs.
- Enhancing LLM Inference with "Effort": The new "Effort" algorithm allows dynamic adjustment of the number of calculations during LLM inference and is detailed in a project whose source code is available on GitHub. Discussion suggested interest in implementing the algorithm in other settings like Triton or CUDA.
- DGX Boxes Come NVLinked: It was clarified that DGX boxes generally ship with NVLink installed, as they use SXM-socket GPUs, supported by a resource explaining Nvidia's NVLink and NVSwitch.
- CUDA Matrix Multiplication Clarification: A user was confused about CUDA code for matrix multiplication; another member explained the operation as computing the dot product of a row and a column from two matrices (a minimal kernel sketch follows this list).
- Syncing Threads in CUDA: There was a conversation around the behavior of `__syncthreads()` in CUDA, noting that starting with Volta, all non-exited threads in the block must reach the sync point, a change from older architectures where `__syncthreads()` would ignore exited threads.
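As a concrete illustration of the row-times-column view, a minimal sketch using Numba's CUDA backend (not the code from the discussion):

```python
import numpy as np
from numba import cuda

@cuda.jit
def matmul(A, B, C):
    # Each thread computes one element C[row, col] as the dot product
    # of row `row` of A with column `col` of B.
    row, col = cuda.grid(2)
    if row < C.shape[0] and col < C.shape[1]:
        acc = 0.0
        for k in range(A.shape[1]):
            acc += A[row, k] * B[k, col]
        C[row, col] = acc

A = np.random.rand(64, 32).astype(np.float32)
B = np.random.rand(32, 48).astype(np.float32)
C = np.zeros((64, 48), dtype=np.float32)
matmul[(4, 3), (16, 16)](A, B, C)  # 4x3 blocks of 16x16 threads
np.testing.assert_allclose(C, A @ B, rtol=1e-4)
```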
Links mentioned:
- How to set up Nsight Compute Locally to profile Remote GPUs: no description found
- Effort Engine: A possibly new algorithm for LLM Inference. Adjust smoothly - and in real time - how many calculations you'd like to do during inference.
- What You Need to Know About X11 Forwarding: In this blog post, we'll deep-dive into X11 Forwarding, explaining what X11 is and how it works under the hood.
- 3. Nsight Compute — NsightCompute 12.4 documentation: no description found
- A look at Nvidia's NVLink interconnect and the NVSwitch: A look at Nvidia's NVLink interconnect and the 2-billion transistor NVSwitch that is powering Nvidia's latest DGX-2 deep learning machine.
CUDA MODE ▷ #triton (46 messages🔥):
- Grayscale Conversion Quirks Unveiled: A member faced issues with grayscaling an image using Triton after a resize that did not change its dimensions, resulting in aberrant images. They shared a gist for reproduction at GitHub Gist along with the original tutorial Jupyter Notebook.
- Tackling Memory Fragmentation for Triton Kernels: After debugging, it was determined that large tensor sizes can leave memory non-contiguous, breaking pointer arithmetic in the kernel; the utility function `check_tensors_gpu_ready` was recommended for ensuring data readiness.
- Plotting a Course for Binary Search in Triton: There is a noted gap in Triton's ability to perform binary search or indexing into a static codebook, a capability crucial for porting certain algorithmic examples and quantization work, as discussed in Triton's GitHub Issue.
- Navigating Triton's Indexing and Quantization Challenges: The conversation featured an exchange of ideas on implementing binary search and addressing quantization kernels in Triton, considering the limitations and discussing possible workarounds using Triton's primitives like `tl.reduce` or `tl.scan`.
- Deciphering `make_block_ptr` Parameter Puzzles: A discussion on Triton's `tl.make_block_ptr` function clarified its `order` parameter, which differentiates between row-major and column-major data formats: `order=(1, 0)` means row-major, where the inner axis is contiguous, and `order=(0, 1)` means column-major (a block-pointer sketch follows the links below).
Links mentioned:
- triton-samples/binary_search.py at main · Jokeren/triton-samples: Contribute to Jokeren/triton-samples development by creating an account on GitHub.
- Index in triton · Issue #974 · openai/triton: We'd like to do some indexing in triton kernels, say we have x_ptr, idx_ptr, out_ptr x = tl.load(x_ptr + offsets, mask = mask) idx = tl.load(idx_ptr + offsets, mask = mask) we have: 1. idx = idx.t...
- triton.language.make_block_ptr — Triton documentation: no description found
- triton/python/tutorials/06-fused-attention.py at main · openai/triton: Development repository for the Triton language and compiler - openai/triton
- low-bit-optimizers/lpmm/cpp_extension/fused_adamw_kernel.cu at main · thu-ml/low-bit-optimizers: Low-bit optimizers for PyTorch. Contribute to thu-ml/low-bit-optimizers development by creating an account on GitHub.
- Weird triton kernel behavior for gray scale. (Meant to be copy pasted in a colab with a T4 gpu): Weird triton kernel behavior for gray scale. (Meant to be copy pasted in a colab with a T4 gpu) - weird_triton_repro.py
- lectures/lecture 14/A_Practitioners_Guide_to_Triton.ipynb at main · cuda-mode/lectures: Material for cuda-mode lectures. Contribute to cuda-mode/lectures development by creating an account on GitHub.
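To make the `order` semantics above concrete, here is a minimal copy kernel using block pointers on a row-major tensor (a sketch, not code from the discussion; `tl.make_block_ptr` is still an experimental interface):

```python
import torch
import triton
import triton.language as tl

@triton.jit
def copy_rows(in_ptr, out_ptr, M, N, stride_m,
              BLOCK_M: tl.constexpr, BLOCK_N: tl.constexpr):
    pid = tl.program_id(0)
    # order=(1, 0): the last (column) axis is contiguous, i.e. row-major.
    src = tl.make_block_ptr(base=in_ptr, shape=(M, N), strides=(stride_m, 1),
                            offsets=(pid * BLOCK_M, 0),
                            block_shape=(BLOCK_M, BLOCK_N), order=(1, 0))
    dst = tl.make_block_ptr(base=out_ptr, shape=(M, N), strides=(stride_m, 1),
                            offsets=(pid * BLOCK_M, 0),
                            block_shape=(BLOCK_M, BLOCK_N), order=(1, 0))
    tl.store(dst, tl.load(src, boundary_check=(0, 1)), boundary_check=(0, 1))

x = torch.randn(128, 64, device="cuda")
y = torch.empty_like(x)
copy_rows[(2,)](x, y, 128, 64, x.stride(0), BLOCK_M=64, BLOCK_N=64)
assert torch.equal(x, y)
```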
CUDA MODE ▷ #cuda (8 messages🔥):
- Gratitude for Conceptual Foundations: A member expressed appreciation for a presentation that laid out the conceptual foundation for "layout algebra," suggesting it revealed the "real thing" in the subject.
- Force Inline Queries: `__forceinline` and `__inline` were discussed, with members explaining that they instruct the compiler to embed the function's source code in the caller's context, potentially making execution faster.
- Nsight Systems CLI Troubleshooting: A member resolved a profiling issue with Nsight Systems on Windows involving conflicting core counts, noting that reverting to version 2023.4.4 from 2024.2.1 fixed the problem.
- Inquiry for Performance Measurement Script: A request was made for a script to measure execution time across different thread and block configurations, but no solutions or links were provided in the messages available.
- Inlining and Code Optimization: Discussion highlighted that using `__forceinline` can open up more optimization opportunities for the compiler by removing the overhead of separate function calls, in the same spirit that memory coalescing improves performance by batching memory accesses.
CUDA MODE ▷ #torch (2 messages):
- Understanding GPU Utilization in Neural Network Operations: A question was raised regarding whether operations like `torch.nn.conv2d`, `torch.nn.relu`, and `torch.nn.batchnorm` result in data being transferred between CPU and GPU between each operation. It was clarified that when a GPU tensor is passed through a sequence of functions, all operations are executed on the GPU without copying intermediate results back to host memory.
- Asynchronous Execution on GPU: It was explained that operations on the GPU are scheduled asynchronously, meaning Python instructions return before the computation is complete. Blocking or synchronizing operations that require reading the value, such as `.cpu()`, will cause synchronization with the CPU.
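A small sketch of both points (illustrative sizes; requires a CUDA GPU):

```python
import torch

conv = torch.nn.Conv2d(3, 16, kernel_size=3).cuda()
x = torch.randn(8, 3, 224, 224, device="cuda")

y = torch.relu(conv(x))   # kernels are queued on the GPU; intermediates stay
                          # in device memory and the call returns immediately
torch.cuda.synchronize()  # explicit barrier: wait for all queued work
val = y.mean().cpu()      # reading the value also forces a sync plus a copy
print(val.item())
```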
CUDA MODE ▷ #announcements (1 messages):
- Lecture 15 on CUTLASS: CUDA MODE's Lecture 15 is starting, focusing on CUTLASS. A presentation by the designated speaker is about to commence.
CUDA MODE ▷ #algorithms (1 messages):
andreaskoepf: https://x.com/AliHassaniJr/status/1766108184630943832
CUDA MODE ▷ #beginner (27 messages🔥):
- CUDA Lectures Ongoing and Upcoming Schedules: CUDA MODE lecture 2 has begun in the general channel; interested members can join, and another session is scheduled for the NAM time zone on Sunday. Details and planning occur in a separate invite channel, with the link shared as CUDA MODE Lecture Planning.
- Lecturer's Engaging Style Captures Audience: Members were entertained by the lecturer's fun and engaging style, with one noting that the author is "quite a funny entertaining chap."
- Matrix Multiplication Explorations in CUDA: A member asked for clarification on a matrix multiplication function, sparking a discussion and the sharing of code examples, such as a Python Numba implementation for fast matrix multiplication.
- Bringing Image and Video Processing to Life with CUDA: A conversation about possible projects using CUDA included extending image processing examples to handle video processing and adding more functionality.
- Hardware Selection for ML Tasks Discussed: There's an ongoing discussion on hardware choices for machine learning systems, comparing the merits of a 2x2070 dual-GPU setup and a single 4090 GPU. One member advised that the 4090 is preferable for simplicity of setup, though cost concerns were raised.
Link mentioned: Discord - A New Way to Chat with Friends & Communities: Discord is the easiest way to communicate over voice, video, and text. Chat, hang out, and stay close with your friends and communities.
CUDA MODE ▷ #pmpp-book (2 messages):
- Collaborative Exercise Verification: A member offers to verify exercise answers for those who have attempted the exercises; verification is conditional upon members first attempting the exercises and submitting a photo via DM. There are resources for different chapters, including Ch 2, Ch 3, Ch 4, and the highlighted Ch 5.
- CUDA Kernel Loop Execution Query: A member is seeking clarification on why an author suggests that a simple reduction CUDA kernel loop would execute 7 times with a 256-element input and a block size of 128, when their own calculations suggest the loop should execute 8 times. They have provided screenshots of the code and the author's claims for reference.
CUDA MODE ▷ #youtube-recordings (1 messages):
.bexboy: I suppose that this one session will be uploaded too?
CUDA MODE ▷ #jax (1 messages):
- Memory Troubles with DenseFormer in JAX: A member is facing challenges implementing a DenseFormer in JAX due to high memory usage. They referenced the DenseFormer's GitHub repository and described its efficient in-place tensor mutation in PyTorch, while noting that JAX/XLA's functional approach doesn't optimize away copies as well, leading to memory issues.
- Exploration of Write-Once Buffers: Drawing inspiration from the Equinox library, the member successfully created a write-once buffer for gradients with respect to the input, but ran into quadratic memory growth when computing gradients with respect to the DenseFormer block weights.
- Considering Custom Gradients for a Lean Memory Footprint: To overcome the quadratic memory usage, the user is considering a custom backward pass for the entire loop/scan function, a complex solution that seeks to replicate PyTorch's efficient in-place updating within JAX's functional paradigm. They are open to high-level suggestions on tackling this problem.
Link mentioned: equinox/equinox/internal/_loop/common.py at main · patrick-kidger/equinox: Elegant easy-to-use neural networks + scientific computing in JAX. https://docs.kidger.site/equinox/ - patrick-kidger/equinox
CUDA MODE ▷ #ring-attention (3 messages):
- Ring Attention Model Training Inquiry: In response to a question about implementing training with Ring Attention, another member shared a GitHub link to the Axolotl repository where code related to this is being developed. They mention having manual placement working and successful tests with tinyllama.
Link mentioned: GitHub - cuda-mode/axolotl at ring_attention_patching: Go ahead and axolotl questions. Contribute to cuda-mode/axolotl development by creating an account on GitHub.
CUDA MODE ▷ #off-topic (4 messages):
- Regional Surprise in Münster: Members of the CUDA MODE Discord expressed amusement upon discovering that three of them, including @umerha, live in close proximity in the Münster area, highlighting the small world of the GPU community.
- Pleasant Meetup Experience: @umerha and @t-vi shared their positive experience meeting in Münster, referring to the visit as "an honor and a pleasure."
- Germany's GPU Capital Unites CUDA Enthusiasts: @umerha mentioned a "pilgrimage" to Münster, humorously dubbing it Germany's GPU capital, while enjoying the company of fellow members @761222713611386900 and @719599526448463933.
CUDA MODE ▷ #hqq (15 messages🔥):
- Promising Triton Kernel Benchmarks Announced: A new fused Triton `int4 / fp16` kernel was introduced, showing improved performance for various compute shapes, with detailed benchmarking results provided. The benchmark indicates that the kernel requires Triton >= 3.0.0, and comparisons with the reference `hqq.linear` and the `int4_mm` kernel from Torch are included.
- Transposing for Better Backward Pass Efficiency: A discussion focused on the need to transpose quantized weight matrices for the backward pass when training with quantization. The forward pass uses `torch.matmul(x, dequantize().t())` while the backward pass needs `torch.matmul(grad_output, dequantize())`, differences highlighted in the HQQ GitHub repository (a schematic sketch follows the links below).
- Quantization and Performance Considerations: Members talked about the performance drop when using dequantization, noting that a typical CUDA dequantize kernel plus torch.matmul is around 15% slower than a pure torch.matmul with fp16 or bf16.
- Extension of Triton Kernel to Support `axis=0`: A request was made to extend the new Triton kernel's capabilities to handle computation along `axis=0` to improve quantization quality. Relevant Triton code was shared for reference here.
- Triton Transpose Implementation Completed: The Triton kernel now includes an implementation for transposed weight matrices, as requested for more efficient backward passes. The updated test and implementation were posted in the pull request on GitHub.
Links mentioned:
- hqq/hqq/kernels/triton/dequant.py at triton · mobiusml/hqq: Official implementation of Half-Quadratic Quantization (HQQ) - mobiusml/hqq
- hqq/hqq/core/quantize.py at master · mobiusml/hqq: Official implementation of Half-Quadratic Quantization (HQQ) - mobiusml/hqq
- Fused HQQ Quantization Gemm by jeromeku · Pull Request #153 · pytorch-labs/ao: @msaroufim Fused int4 / fp16 Quant Matmul Fused kernel that combines asymmetric dequantization and gemm: Dequantization: upcasts u4 / s4 weights to float16 / bfloat16, followed by groupwise scalin...
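A schematic of the two matmuls discussed above (`dequantize` here is a hypothetical stand-in, not HQQ's actual API):

```python
import torch

def dequantize(W_q, meta):
    # Hypothetical stand-in: real HQQ unpacks grouped low-bit weights
    # using per-group scales and zero points.
    return (W_q.float() - meta["zero"]) * meta["scale"]

def linear_fwd(x, W_q, meta):
    # Forward pass: y = x @ W.T, dequantizing the packed weights on the fly.
    return torch.matmul(x, dequantize(W_q, meta).t())

def grad_wrt_input(grad_out, W_q, meta):
    # Backward pass w.r.t. the input uses the non-transposed weights,
    # which is why a transposed variant of the fused kernel is needed.
    return torch.matmul(grad_out, dequantize(W_q, meta))
```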
CUDA MODE ▷ #llmdotc (600 messages🔥🔥🔥):
- Atomics in CUDA and Performance Bottlenecks: Discussions focused on removing atomic operations from CUDA kernels as part of performance optimization efforts. Despite concerns about how to parallelize updates when indices can vary broadly, suggestions included using scratch memory and multiple kernel calls, or pre-processing on the CPU to sort indices. The contention caused by atomics and dealing with majority-repeating indices were also discussed.
- BF16/FP16 Mixed Precision Implementation: A significant conversation around the implementation of BF16/FP16 mixed precision training revealed an approximate 1.86x performance gain. While efforts to optimize for lower precisions like FP8 were briefly mentioned, the PR (#218) introduces complexity with stochastic rounding (a sketch follows this list) and optimizer state that must be managed in BF16/FP16. The latest implementation keeps layernorm in FP32 due to slow performance with BF16 atomic operations.
- CUDA Version Requirements in FP16 Conversion: Compilation errors occurred due to an older CUDA version on one of the devices, highlighting a dependency on newer CUDA versions for BF16 support. The problem of cuBLAS not accepting FP8 biases for FP8 matmuls, requiring BF16 biases instead, was also noted.
- Kernel Optimization and Profiling: Some community members shared insights and progress on optimizing CUDA kernels using techniques like dtype sizing and float4 vectors, potentially leading to a 2x speedup in the GELU and AdamW kernels. A suggestion was made to update the kernel development scripts to reflect real-world sizes for better profiling accuracy.
- Optimizing Memory-Throttled Kernels with Thread Coarsening: During a community collaboration session, thread coarsening was applied to the AdamW kernel to improve its performance, since the kernel is memory bound. This optimization batches memory requests so they are better parallelized, with further enhancements planned, especially after the transition to FP16.
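For context on the stochastic-rounding piece, the usual bit trick adds random noise below the bf16 cutoff and truncates; a simplified PyTorch sketch (sign and NaN corner cases ignored; llm.c does this inside its CUDA kernels):

```python
import torch

def to_bf16_stochastic(x: torch.Tensor) -> torch.Tensor:
    # bf16 keeps the top 16 bits of the fp32 pattern. Adding uniform noise
    # in [0, 2^16) before truncating rounds up with probability equal to
    # the fraction of the value being dropped.
    bits = x.view(torch.int32)
    noise = torch.randint_like(bits, 0, 1 << 16)
    rounded = (bits + noise) & -65536          # zero out the low 16 bits
    return rounded.view(torch.float32).to(torch.bfloat16)

x = torch.full((100_000,), 1.002)              # sits between two bf16 values
print(to_bf16_stochastic(x).float().mean())    # ~1.002 in expectation
```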
Links mentioned:
- zhizhinpeter - Twitch: Coding Multi-GPU for llm.c
- 8-bit Optimizers via Block-wise Quantization: Stateful optimizers maintain gradient statistics over time, e.g., the exponentially smoothed sum (SGD with momentum) or squared sum (Adam) of past gradient values. This state can be used to accelerate...
- YouTube: no description found
- Examples — NCCL 2.21.5 documentation: no description found
- Examples — NCCL 2.21.5 documentation: no description found
- llm.c/dev/cuda/encoder_backward.cu at master · karpathy/llm.c: LLM training in simple, raw C/CUDA. Contribute to karpathy/llm.c development by creating an account on GitHub.
- flash_attn_jax/csrc/flash_attn/src at main · nshepperd/flash_attn_jax: JAX bindings for Flash Attention v2. Contribute to nshepperd/flash_attn_jax development by creating an account on GitHub.
- GitHub - KernelTuner/kernel_float: CUDA header-only library for working with vector types (half2, float4, double2) and reduced precision math (half, e5m2) inside kernel code: CUDA header-only library for working with vector types (half2, float4, double2) and reduced precision math (half, e5m2) inside kernel code - KernelTuner/kernel_float
- clang: lib/Headers/__clang_cuda_intrinsics.h Source File: no description found
- Support for FP16/BF16 in train_gpt2.cu (1.86x Perf) by ademeure · Pull Request #218 · karpathy/llm.c: Now finished and reasonably happy with it! 1.86x performance on my RTX 4090: FP32: ~80ms BF16: ~43ms (with layernorm params in FP32, but all activations in BF16) This allows the same train_gpt2.c...
- How to go from 0 to speeding up LLM.c - CUDA Kernel Profiling setup: Commands to run after getting the instance set up: git clone https://github.com/karpathy/llm.c.git; export PATH=/usr/local/cuda/bin:$PATH; source ~/.bashrc; sudo apt...
- Added shared memory for the atomic additions for the layernorm_back by ChrisDryden · Pull Request #210 · karpathy/llm.c: This cr was made to address the issue found in the profiler that the atomic operations in the final loop of this kernel were causing a bunch of warp stalls. By doing the atomic operation on shared ...
- bug: something goes wrong at larger batch sizes · Issue #212 · karpathy/llm.c: There's some bug I have difficulty tracking down today and I'm going to give up for tonight and try again tomorrow. Reproduction: ./train_gpt2cu -b 12 launches the job with batch size 12. On m...
- flash-attention/csrc/flash_attn/src/flash_fwd_kernel.h at main · Dao-AILab/flash-attention: Fast and memory-efficient exact attention. Contribute to Dao-AILab/flash-attention development by creating an account on GitHub.
- Faster `matmul_backward_bias` using coalesced reads and shared memory in the kernel by al0vya · Pull Request #221 · karpathy/llm.c: This kernel seems to offer a <4x runtime improvement over matmul_backward_bias_kernel2 on an RTX 2070 Super GPU, runtime comparison shown below: matmul_backward_bias_kernel2: block_size 32 time 0.9...
- cuDNN Forward Attention + FP16 non-cuDNN version in /dev/cuda/ by ademeure · Pull Request #215 · karpathy/llm.c: Previous Kernel 4: 1.74ms Kernel 4 with TF32: 1.70ms Kernel 5 (4 with BF16 I/O): 0.91ms Kernel 6 (5 without permute, not realistic): 0.76ms Kernel 10 (cuDNN BF16, with FP32 conversion): 0.33ms Kern...
- speed up the backward bias kernel by 45% and speed up the full runnin… · karpathy/llm.c@8488669: …g time by 1%
CUDA MODE ▷ #massively-parallel-crew (29 messages🔥):
- Introducing the Moderator Role: A new role called "Moderator" has been introduced to manage users and content, with permissions that include timing out, kicking, and banning users and deleting inappropriate messages. Moderators can also create and edit events and manage the stage to maintain a friendly environment for GPU and massively parallel programming discussions.
- Technical Difficulties in Recording Panel Discussions: Members discussed technical issues experienced during the recording of a panel discussion. The conversation included coordinating to meet before future talks to ensure recording setups are functioning well, and the possibility of re-recording talks if necessary.
- Backup Recordings Save the Day: One member reported an abrupt end to their recording session, but it was covered by another member's backup. They confirmed that the combined material from the two recordings should suffice for a complete session.
- Scheduling Future Talks and Dry Runs: With several events upcoming, members coordinated on being ready 15 minutes before the scheduled time to ensure technical setups were in place. One member noted their unavailability for recording on one of the days, but offered to handle session recording and post-production the following day.
- Open Invitation for FlashAttention Code Deep-Dive: After a tweet was shared about FlashAttention, the idea of a specialized deep-dive event was proposed, although no immediate plans were made. Additionally, members suggested reaching out to Tri Dao for a potential talk on his Flash decoding work, acknowledging that he has previously presented on related topics.
Link mentioned: Flash Attention 2.0 with Tri Dao (author)! | Discord server talks: ❤️ Become The AI Epiphany Patreon ❤️ https://www.patreon.com/theaiepiphany 👨‍👩‍👧‍👦 Join our Discord community 👨‍👩‍👧‍👦 https://discord.gg/peBrCpheKE Hey g…
Eleuther ▷ #general (262 messages🔥🔥):
- LLM Local App Speculations: Users discussed the feasibility of running LLMs locally on smartphones, including the Eleuther community potentially developing an easy-to-use app. The memory bandwidth and GPU capabilities of different smartphone models, like the Samsung S24 Ultra and Snapdragon devices, were referenced, suggesting even 7-8B models might be usable.
- Technological Diving into Smartphone Capabilities: Conversations delved into the hardware specifications of modern smartphones, such as the Samsung Exynos 2400 chipset, to estimate the performance of LLMs running locally. Specs like the 6.4 Gbps pin speed and 51.2 GB/s memory bandwidth were scrutinized (a back-of-envelope calculation follows this list), and speculative decoding was suggested as a possible method to improve token generation rates.
- Examining Existing Apps for Local LLM Use: Users explored existing solutions like MLC-LLM for deploying AI models natively on devices. They also discussed other apps found on the App Store and Play Store, such as "MLC Chat" and "Private AI", which run LLMs offline, indicating there are already applications attempting this.
- Hugging Face Downtime and Business Model Debate: Extended downtime on Hugging Face triggered a debate about its business model. Users pondered its strategy, comparing it to platforms like GitHub, and questioned the sustainability of providing free hosting for large AI models.
- Discussions on Reasoning in LLMs Beyond CoT: The conversation turned to evaluating reasoning in LLMs with methods such as Chain-of-Thought (CoT). A recent research paper integrating Monte Carlo Tree Search with LLMs was suggested as an alternative to CoT reasoning (AlphaLLM).
- Cost Analysis of LLM Training: Discussion touched on the costs associated with training large models like Llama 2, considering factors such as GPU hours and token counts. It also highlighted how easily such costs are underestimated without careful arithmetic.
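On the feasibility question, decode speed for a memory-bound model is roughly memory bandwidth divided by model size, since every generated token streams all weights once; plugging in the figure cited above:

```python
bandwidth = 51.2e9        # bytes/s, the LPDDR5 figure discussed above
params = 8e9              # an 8B-parameter model
bytes_per_param = 0.5     # 4-bit quantization

ceiling = bandwidth / (params * bytes_per_param)
print(f"~{ceiling:.1f} tokens/s upper bound")  # ~12.8 tok/s
```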
Links mentioned:
- Gemini Nano now running on Pixel 8 Pro — the first smartphone with AI built in: Gemini is here, the most capable and flexible AI model we've ever built. Plus more AI updates coming to the Pixel portfolio.
- Private AI - Apps on Google Play: no description found
- Android App â mlc-llm 0.1.0 documentation: no description found
- Samsung Exynos 2400: specs and benchmarks: Samsung Exynos 2400: performance tests in benchmarks (AnTuTu 10, GeekBench 6). Battery life and full specifications.
- MLC Chat: MLC Chat lets users chat with open language models locally on iPads and iPhones. After a model is downloaded to the app, everything runs locally without server support, and it works without internet ...
- GitHub - mlc-ai/mlc-llm: Enable everyone to develop, optimize and deploy AI models natively on everyone's devices.: Enable everyone to develop, optimize and deploy AI models natively on everyone's devices. - mlc-ai/mlc-llm
- Stella Nera: Achieving 161 TOp/s/W with Multiplier-free DNN Acceleration based on Approximate Matrix Multiplication: From classical HPC to deep learning, MatMul is at the heart of today's computing. The recent Maddness method approximates MatMul without the need for multiplication by using a hash-based version o...
- GitHub - Kotlin/kotlindl: High-level Deep Learning Framework written in Kotlin and inspired by Keras: High-level Deep Learning Framework written in Kotlin and inspired by Keras - Kotlin/kotlindl
- Toward Self-Improvement of LLMs via Imagination, Searching, and Criticizing: Despite the impressive capabilities of Large Language Models (LLMs) on various tasks, they still struggle with scenarios that involves complex reasoning and planning. Recent work proposed advanced pro...
- Large Language Models On-Device with MediaPipe and TensorFlow Lite - Google for Developers: no description found
- Samsung Galaxy S24 Ultra review: Samsung's S24 family is launching with Samsung's latest One UI 6.1 on top of Google's latest Android 14. Despite the fairly small ".1" numbering update,...
- Inappropriate Content - Play Console Help: no description found
- GitHub - atfortes/Awesome-LLM-Reasoning: Reasoning in Large Language Models: Papers and Resources, including Chain-of-Thought, Instruction-Tuning and Multimodality.: Reasoning in Large Language Models: Papers and Resources, including Chain-of-Thought, Instruction-Tuning and Multimodality. - GitHub - atfortes/Awesome-LLM-Reasoning: Reasoning in Large Language M...
- LPDDR5 | DRAM | Samsung Semiconductor Global: Meet LPDDR5 powering next-generation applications with performance and efficiency by 6,400 Mbps of pin speed, massive transfer at 51.2Gb/s, and 20% power saving.
Eleuther ▷ #research (443 messages🔥🔥🔥):
- Diffusion Model Inference Steps Discussion: Diffusion models trained with many steps, like 300 or 1000, can be used effectively for inference with significantly fewer steps, such as 10-30 (see the sketch after this list). There's a consensus that the number of training steps doesn't greatly affect quality at a given inference step count.
- Token-Free Language Models: The SpaceByte paper proposes a novel byte-level architecture that tries to close the gap between subword and byte-level autoregressive language modeling. It was noted that tokenizers can potentially leak information about subsequent tokens, which could be a significant nuisance for applications such as autocomplete.
- Concerns About the "Fineweb" Dataset's Relation to LLaMA: While Fineweb offers 15 trillion tokens of CommonCrawl data and claims high performance, members questioned its relationship to LLaMA's dataset and expressed skepticism about the lack of dataset decontamination. The effects of Fineweb's performance will be closely monitored over time.
- AI-Designed CRISPR-Cas Protein: A large language model, ProGen2, was successfully used to design new CRISPR-Cas protein sequences that were then tested in a lab, yielding variants with improved specificity. This example from Profluent Bio indicates the potential of LLMs to accelerate scientific discovery.
- Prompt Priority for Safe Large Language Models: A new paper proposes addressing safety vulnerabilities in LLMs by training models to prioritize instructions according to a defined hierarchy. This approach aims to increase robustness against prompt injections and other attacks without the need for additional preference labels or demonstrations.
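The inference-step point is just a pipeline argument in practice; a sketch with diffusers (model id illustrative):

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Trained against a 1000-step noise schedule, but 20-30 sampler steps
# are usually enough at inference time.
image = pipe("a watercolor fox", num_inference_steps=25).images[0]
image.save("fox.png")
```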
Links mentioned:
- Tweet from Stella Biderman (@BlancheMinerva): Create a benchmark for RAG models where all of the questions require information from multiple documents to be synthesized answer them. Study how models trained on publicly released data do on it and ...
- The Instruction Hierarchy: Training LLMs to Prioritize Privileged Instructions: Today's LLMs are susceptible to prompt injections, jailbreaks, and other attacks that allow adversaries to overwrite a model's original instructions with their own malicious prompts. In this w...
- Self-Supervised Alignment with Mutual Information: Learning to Follow Principles without Preference Labels: When prompting a language model (LM), users frequently expect the model to adhere to a set of behavioral principles across diverse tasks, such as producing insightful content while avoiding harmful or...
- SpaceByte: Towards Deleting Tokenization from Large Language Modeling: Tokenization is widely used in large language models because it significantly improves performance. However, tokenization imposes several disadvantages, such as performance biases, increased adversari...
- MambaByte: Token-free Selective State Space Model: Token-free language models learn directly from raw bytes and remove the inductive bias of subword tokenization. Operating on bytes, however, results in significantly longer sequences. In this setting,...
- Hyper-SD: Trajectory Segmented Consistency Model for Efficient Image Synthesis: Recently, a series of diffusion-aware distillation algorithms have emerged to alleviate the computational overhead associated with the multi-step inference process of Diffusion Models (DMs). Current d...
- Mixture-of-Depths: Dynamically allocating compute in transformer-based language models: Transformer-based language models spread FLOPs uniformly across input sequences. In this work we demonstrate that transformers can instead learn to dynamically allocate FLOPs (or compute) to specific ...
- stabilityai/stablelm-3b-4e1t · Hugging Face: no description found
- Transformers are Multi-State RNNs: Transformers are considered conceptually different compared to the previous generation of state-of-the-art NLP models - recurrent neural networks (RNNs). In this work, we demonstrate that decoder-only...
- Profluent: We are fluent in the language of protein design.
- A Thorough Examination of Decoding Methods in the Era of LLMs: Decoding methods play an indispensable role in converting language models from next-token predictors into practical task solvers. Prior research on decoding methods, primarily focusing on task-specifi...
- Soaring from 4K to 400K: Extending LLM's Context with Activation Beacon: The utilization of long contexts poses a big challenge for LLMs due to their limited context window size. Although the context window can be extended through fine-tuning, it will result in a considera...
- Larimar: Large Language Models with Episodic Memory Control: Efficient and accurate updating of knowledge stored in Large Language Models (LLMs) is one of the most pressing research challenges today. This paper presents Larimar - a novel, brain-inspired archite...
- GitHub - microsoft/LLMLingua: To speed up LLMs' inference and enhance LLM's perceive of key information, compress the prompt and KV-Cache, which achieves up to 20x compression with minimal performance loss.: To speed up LLMs' inference and enhance LLM's perceive of key information, compress the prompt and KV-Cache, which achieves up to 20x compression with minimal performance loss. - GitH...
- GitHub - krafton-ai/mambaformer-icl: MambaFormer in-context learning experiments and implementation for https://arxiv.org/abs/2402.04248: MambaFormer in-context learning experiments and implementation for https://arxiv.org/abs/2402.04248 - krafton-ai/mambaformer-icl
- Design of highly functional genome editors by modeling the universe of CRISPR-Cas sequences: Gene editing has the potential to solve fundamental challenges in agriculture, biotechnology, and human health. CRISPR-based gene editors derived from microbes, while powerful, often show significant ...
- Lossless Acceleration of Large Language Model via Adaptive N-gram Parallel Decoding: While Large Language Models (LLMs) have shown remarkable abilities, they are hindered by significant resource consumption and considerable latency due to autoregressive processing. In this study, we i...
- Editing Models with Task Arithmetic: Changing how pre-trained models behave -- e.g., improving their performance on a downstream task or mitigating biases learned during pre-training -- is a common practice when developing machine learni...
- Large Language Models on Graphs: A Comprehensive Survey: Large language models (LLMs), such as GPT4 and LLaMA, are creating significant advancements in natural language processing, due to their strong text encoding/decoding ability and newly found emergent ...
- HuggingFaceFW/fineweb · Datasets at Hugging Face: no description found
- Sisihae GIF - Sisihae - Discover & Share GIFs: Click to view the GIF
- Why do small language models underperform? Studying Language Model Saturation via the Softmax Bottleneck: Recent advances in language modeling consist in pretraining highly parameterized neural networks on extremely large web-mined text corpora. Training and inference with such models can be costly in pra...
- Towards Graph Foundation Models: A Survey and Beyond: Foundation models have emerged as critical components in a variety of artificial intelligence applications, and showcase significant success in natural language processing and several other domains. M...
- [AINews] FineWeb: 15T Tokens, 12 years of CommonCrawl (deduped and filtered, you're welcome): AI News for 4/19/2024-4/22/2024. We checked 6 subreddits and 364 Twitters and 27 Discords (395 channels, and 14973 messages) for you. Estimated reading time...
- On Limitations of the Transformer Architecture: no description found
Eleuther ▷ #scaling-laws (35 messages🔥):
- Twitter Confrontation Over Rounding Data: A member expressed frustration over being blocked on Twitter after criticizing someone for rounding numbers in their publication, sharing a tweet as evidence. The conversation revolved around the tone and approach used, with others pointing out that the member's direct tone might come off as rude or confrontational.
- Tone Matters in Critical Conversations: Other members joined the discussion, suggesting that the original poster's tone might have been perceived as aggressive or trolling, which could lead to defensive reactions. They emphasized the importance of a friendly and constructive tone when engaging in debates, especially when trying to convey criticism.
- Misunderstandings in Communication Identified: It was suggested that confusion arose because the member incorrectly attributed the rounding of data to the replication team, when in fact the original Chinchilla paper authors reported rounded results. Clarifications were made about the capabilities of TeX in handling significant figures and rendering vector formats like SVG.
- Critique of Chinchilla Paper and Replication Methodology: The member clarified that the real issue was not the rounding itself but the replication authors not noticing that the residuals were not centered around zero, which could indicate a mistake in their replication process. This detailed feedback was part of a larger discussion critiquing the methodologies used in the Chinchilla paper reproduction.
- Constructive Dissection of Social Media Interaction: Participants dissected the nuances of online communication and jokingly crafted a template for friendly internet discourse, highlighting the balance needed between being direct and including "neurotypical decoration" in posts to avoid being misunderstood.
Link mentioned: Tweet from Kyo (@kyo_takano): You ARE rounding the original estimate lol Try inspecting the TeX source like you did PDF figures. To be more specific, you rounded: - E from exp(0.5267228) to 1.69 - A from exp(6.0073404) to 406.4 …
Eleuther ▷ #interpretability-general (2 messages):
- Exponential Growth in Residual Stream Norms Uncovered: A shared post from LessWrong reveals that the norm of the residual stream in language models like GPT2-XL grows exponentially during the forward pass (a measurement sketch follows the link below). The post suggests LayerNorm makes it difficult to cancel out existing features, so the model instead writes new features at a larger scale to overshadow them, with norms growing roughly 4.5% per layer.
Link mentioned: Residual stream norms grow exponentially over the forward pass — LessWrong: Summary: For a range of language models and a range of input prompts, the norm of each residual stream grows exponentially over the forward pass, wit…
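The effect is straightforward to reproduce; a sketch that prints the mean residual-stream norm after each block (the post used GPT2-XL; plain gpt2 shows the same trend):

```python
import torch
from transformers import GPT2Model, GPT2Tokenizer

tok = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2Model.from_pretrained("gpt2")

ids = tok("The quick brown fox jumps over the lazy dog", return_tensors="pt")
with torch.no_grad():
    out = model(**ids, output_hidden_states=True)

# hidden_states[i] is the residual stream after block i (index 0 = embeddings)
for i, h in enumerate(out.hidden_states):
    print(i, h.norm(dim=-1).mean().item())
```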
Eleuther ▷ #lm-thunderdome (8 messages🔥):
- Seeking Forked Sanity: A member humorously noted that research groups prefer running private forks of the lm-evaluation-harness instead of engaging in direct model comparisons.
- Token Inquiry at Evaluation: A question was raised about whether the eval harness automatically adds a beginning-of-sequence (BOS) token (see the sketch after this list).
- Experimenting with MMLU Task Implementation: A member proposed adding an MMLU task implementation using the ARC prompt format, aimed at investigating the impact of the MMLU prompt format on model scores.
- Call for Genericization in Task Implementation: In response to the proposal, another member suggested ideally creating a generic implementation capable of supporting various styles like "ARC style" and "MMLU style" for all MCQA tasks, though expressing interest in the specific implementation until a more general one is developed.
- Parallel Metrics Exploration: A query was posted about executing metrics from the lm-evaluation-harness in parallel, with a request for further elaboration on the specific needs.
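On the BOS question, the harness's HF backend exposes this as an explicit model argument rather than something applied silently; a sketch with the Python API (defaults vary by model type and harness version, so treat this as illustrative):

```python
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    # add_bos_token forces a BOS to be prepended instead of relying on
    # the tokenizer's default behavior
    model_args="pretrained=EleutherAI/pythia-160m,add_bos_token=True",
    tasks=["arc_easy"],
)
print(results["results"])
```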
Eleuther ▷ #gpt-neox-dev (14 messages🔥):
- Discussing RWKV Integration with GPT-NeoX: Developers are currently focused on integrating RWKV (Receptance Weighted Key Value) into GPT-NeoX. The integration work can be tracked through GitHub Issue #1167 and involves disabling bf16, PP, TP, and MoE, and adding fp16 support and JIT kernel compilation, among other tasks.
- FP16 Support Being Integrated: A new branch containing fp16 and fp32 support for RWKV within GPT-NeoX has been pushed by a developer, available here. The integration is simple and pending testing with the NeoX trainer.
- Kernel Enhancement and Code Transfer: A developer has newly optimized kernel code ready for RWKV, which could potentially allow full state gradients for future BPTT use. This new method and code are available on the developer's GitHub fork, specifically the branch rwkv-6-support.
- RWKV Version Numbering Suggested: Due to the iterative nature of the RWKV integration work, it has been suggested to use version numbering to identify different iterations, such as "rwkv 6.0". The best approach for this naming convention (file-, class-, or directory-specific) is still under consideration.
Links mentioned:
- GitHub - RWKV/RWKV-infctx-trainer at rwkv-6-support: RWKV infctx trainer, for training arbitary context sizes, to 10k and beyond! - GitHub - RWKV/RWKV-infctx-trainer at rwkv-6-support
- Comparing main...rwkv-6-support · RWKV/RWKV-infctx-trainer: RWKV infctx trainer, for training arbitary context sizes, to 10k and beyond! - Comparing main...rwkv-6-support · RWKV/RWKV-infctx-trainer
- Add Basic RWKV Block to GPT-NeoX · Issue #1167 · EleutherAI/gpt-neox: We want to add RWKV to gpt-neox: Add basic RWKV block, without kernels, from https://github.com/BlinkDL/RWKV-LM to https://github.com/EleutherAI/gpt-neox/tree/main/megatron/model Add rwkv kernels A...
- GitHub: Let's build from here: GitHub is where over 100 million developers shape the future of software, together. Contribute to the open source community, manage your Git repositories, review code like a pro, track bugs and fea...
- GitHub - SmerkyG/gpt-neox at rwkv: An implementation of model parallel autoregressive transformers on GPUs, based on the DeepSpeed library. - GitHub - SmerkyG/gpt-neox at rwkv
HuggingFace ▷ #general (473 messages🔥🔥🔥):
- Hugging Face Downtime Concerns: Several users reported experiencing 504 Gateway Time-outs and service disruptions while trying to access or use Hugging Face, indicating potential downtime or server issues.
- Meta-Llama 3 Integration Questions: Users discussed the integration of Meta Llama 3 with serverless inference API and whether features like system prompts are supported when making requests.
- Autotrain Inquiry: There was a query about whether AutoTrain supports custom models like phi-3 for fine-tuning, which was addressed by pointing to Hugging Face documentation and previous successful usage.
- Model Upload Hurdles: A user sought help with uploading GGUF files to Hugging Face due to a file size limit, which prompted advice on sharding or splitting files to fit within service constraints (see the sketch after this list).
- Exploring OCR Options: Discussion centered on finding an effective OCR solution for reading float numbers, with options like PaddleOCR and kerasOCR being mentioned as potentially better alternatives to tesseract and EasyOCR.
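On the upload-size point, the Hub caps individual files at 50 GB, so oversized GGUFs are usually split first (llama.cpp ships a gguf-split tool) and the shards pushed one by one; a sketch with huggingface_hub (repo id and file names hypothetical):

```python
from huggingface_hub import HfApi

api = HfApi()
for shard in ["model-00001-of-00002.gguf", "model-00002-of-00002.gguf"]:
    api.upload_file(
        path_or_fileobj=shard,
        path_in_repo=shard,
        repo_id="your-username/your-model",  # hypothetical repo
        repo_type="model",
    )
```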
Links mentioned:
- Google Colaboratory: no description found
- Tweet from abhishek (@abhi1thakur): Phi-3 is here!!!! đ and ofcourse, you can already fine-tune it using AutoTrain đđđ
- Pretraining on the Test Set Is All You Need: Inspired by recent work demonstrating the promise of smaller Transformer-based language models pretrained on carefully curated data, we supercharge such approaches by investing heavily in curating a n...
- Llama 3-70B - HuggingChat: Use the Llama 3-70B assistant inside of HuggingChat
- Resident Evil Resident Evil Welcome To Raccoon City GIF - Resident Evil Resident Evil Welcome To Raccoon City Resident Evil Movie - Discover & Share GIFs: Click to view the GIF
- Turn Down For What Snoop Dogg GIF - Turn Down For What Snoop Dogg Cheers - Discover & Share GIFs: Click to view the GIF
- Jinx The Cat Jinx GIF - Jinx The Cat Jinx Jinx Cat - Discover & Share GIFs: Click to view the GIF
- Im Dead Dead Bruh GIF - Im Dead Dead Bruh Skeleton Dead Bruh - Discover & Share GIFs: Click to view the GIF
- HF-Mirror - Hugging Face mirror site: no description found
- meta-llama/Meta-Llama-3-70B-Instruct · Hugging Face: no description found
- Eyeverse Brace GIF - Eyeverse Brace Initiation - Discover & Share GIFs: Click to view the GIF
- Upload files to the Hub: no description found
- Dinela GIF - Dinela - Discover & Share GIFs: Click to view the GIF
- Google Colaboratory: no description found
- TheBloke/SOLAR-10.7B-Instruct-v1.0-uncensored-GPTQ · Hugging Face: no description found
- Cat Club Cat GIF - Cat Club Cat Cat Dance - Discover & Share GIFs: Click to view the GIF
- Hugging Face â The AI community building the future.: no description found
- Upload files to the Hub: no description found
- Albert Einstein - HuggingChat: Use the Albert Einstein assistant inside of HuggingChat
- Meta Llama 3 | 8B API Documentation (swift-api-swift-api-default) | RapidAPI: no description found
- View paste 3MUQ: no description found
- Reddit - Dive into anything: no description found
- The Rise of AI: (Turn on the Closed Captions) Join us on a journey through the rapid evolution of Artificial Intelligence, starting from...
- "It's A UNIX System!" | Jurassic Park | Science Fiction Station: Hackerman Lexi (Ariana Richards) shows off her nerd skills as she tries to fix Jurassic Park's UNIX control system.Jurassic Park (1993): John Hammond, an ent...
- Discord - A New Way to Chat with Friends & Communities: Discord is the easiest way to communicate over voice, video, and text. Chat, hang out, and stay close with your friends and communities.
- Hugging Face status : no description found
- MTEB Leaderboard - a Hugging Face Space by mteb: no description found
- Reddit - Dive into anything: no description found
HuggingFace ▷ #today-im-learning (13 messages🔥):
- Studying AI's Speed, Cost, and Quality: A video titled "ORPO with LLaMa 3- Fast, Cheap, and Good!" discusses innovations in AI that challenge the old saying "Fast, Cheap, Good - pick two." The video can be found on YouTube.
- First Reinforcement Learning Model Success: A member learned how to create their first reinforcement learning model and shared a Hugging Face model card for a PPO agent trained to play LunarLander-v2.
- Exploring Tokenization: One member is focusing on learning about tokenizers today.
- Dependency on Hugging Face: A member remarked on their continued reliance on Hugging Faceâs resources even with local model installations.
- Creating RAG Systems with AI Agents: Members are learning to construct RAG systems utilizing the Llamaindex and are also exploring implementation with offline, open-source models using libraries like transformer.js.
Links mentioned:
- wsqstar/ppo-LunarLander-v2 · Hugging Face: no description found
- ORPO with LLaMA 3- Fast, Cheap, and Good!: The old saying goes "Fast, Cheap, Good- Pick two". AI has been no different, but we're starting to see some great innovations to change that. Great article f...
- Build an Agent with Long-Term, Personalized Memory: This video explores how to store conversational memory similar to ChatGPT's new long-term memory feature. We'll use LangGraph to build a simple memory-managin...
- (RVC) I Can't Dance (AI Cover Mashup) (READ DESC): #aicover #icantdance #genesis Disclaimer: This is a simple and fun AI mashup video I made during my spare time utilizing my and other people's AI voice model...
HuggingFace ▷ #cool-finds (21 messages🔥):
- Exploring Quantum Computing: A video titled "New quantum computers - Potential and pitfalls | DW Documentary" was shared, discussing the capabilities of new supercomputers in potentially reducing animal experiments and curing cancer.
- Neural Networks Demystified: A member shared a YouTube video titled "Why Neural Networks can learn (almost) anything", which explains the functioning and usefulness of neural networks.
- Voice-Prompted AI Image Generation: An intriguing Twitter post demonstrates live streaming of high-resolution images generated by AI in response to spoken (whisper) voice commands.
- Comprehensive Offline RL Framework Revealed: The message highlighted Hokoff, a resource providing pre-collected datasets and a framework for Offline Reinforcement Learning and Multi-Agent Reinforcement Learning research.
- Interactive JavaScript for 🤗 Transformers: A tool was introduced that allows running Hugging Face Transformers directly in the browser; explore it at transformers.js.
Links mentioned:
- Self-Reasoning Tokens, teaching models to think ahead.: What is the mathematical formulation of reasoning? How can we make LLMs like chatGPT think before they speak? And how can we make that baked into the model so it can learn to think in a self-supervise...
- PhysDreamer: Physics-Based Interaction with 3D Objects via Video Generation: Realistic object interactions are crucial for creating immersive virtual experiences, yet synthesizing realistic 3D object dynamics in response to novel interactions remains a significant challenge. U...
- Transformers.js: no description found
- Neural Networks: Zero To Hero: no description found
- Hokoff: Abstract
- ByteDance (ByteDance): no description found
- New quantum computers - Potential and pitfalls | DW Documentary: A new supercomputer is slated to make it possible to reduce animal experiments and perhaps to cure cancer. The hype surrounding quantum computing is inspirin...
- Why Neural Networks can learn (almost) anything: A video about neural networks, how they work, and why they're useful. My twitter: https://twitter.com/max_romana SOURCES Neural network playground: https://play...
HuggingFace ▷ #i-made-this (25 messages🔥):
- Math PDFs Transformed into Conversational Partners: Crizomb introduced an open-source retrieval-augmented generation (RAG) project, ai_pdf, that enables users to chat with any PDF locally; it is particularly effective with math documents, converting them to LaTeX for easy processing by computers.
- Groundbreaking Real-Time Video Generation: Aifartist shared a Reddit post showcasing a 2.5-minute video generated in real time through voice direction. They emphasize the quick feedback loop and the potential to create movies in real time using only voice commands.
- Infini Attention Explained Simply: Subham5089 wrote a simplified explanation of the new Infini Attention, designed to help with understanding its impact on AI, and shared the write-up on LinkedIn.
- Innovative Bot Programming Achieved: Acoloss shared an amusing update about their project, which involves bots with individual memories/histories performing actions based on their capabilities. They noted the implementation is functioning surprisingly well, with thoughtful output communication.
- 3LC's Beta Launch to Revolutionize Datasets: The 3LC platform has been announced, offering tools to refine datasets and ML models, enhancing computer vision with plans to extend support to LLMs. Users can join the beta to shape the platform's development, with exclusive access for 100 users and free non-commercial use.
Links mentioned:
- ehristoforu/llama-3-12b-instruct · Hugging Face: no description found
- ehristoforu/Gixtral-100B · Hugging Face: no description found
- Home: no description found
- ehristoforu/Gistral-16B · Hugging Face: no description found
- bineric/NorskGPT-Llama3-8b · Hugging Face: no description found
- QuantFactory/Meta-Llama-3-70B-Instruct-GGUF · Hugging Face: no description found
- RAG chatbot using llama3: no description found
- Reddit - Dive into anything: no description found
- GitHub - Crizomb/ai_pdf: Chat locally with any PDF. Ask questions, get answers with useful references. Works well with math PDFs (converts them to LaTeX, a math syntax comprehensible by computers) - Crizomb/ai_pdf
- Outpainting Demo - a Hugging Face Space by clinteroni: no description found
- VTuberLogoGenerator - a Hugging Face Space by gojiteji: no description found
- moondream2-batch-processing - a Hugging Face Space by Csplk: no description found
HuggingFace ▷ #computer-vision (4 messages):
- Seeking Architecture for Invoice Data Extraction: A member is working on a project to extract data from scanned invoices and receipts and is seeking an architecture for building a machine learning model for this task.
- TrackNetV3 in Action: A member shared the TrackNetV3 repository and is inquiring about processing the model's output frame by frame as each frame is read, rather than reading all frames before computing.
- Introducing Themselves: A user named jackwean_75093 has joined and greeted the community.
- Quest for Personal Knowledge Base Construction: The same user, jackwean_75093, asked how to build a private knowledge base but provided no further details.
Link mentioned: GitHub - qaz812345/TrackNetV3: Implementation of paper - TrackNetV3: Enhancing ShuttleCock Tracking with Augmentations and Trajectory Rectification: Implementation of paper - TrackNetV3: Enhancing ShuttleCock Tracking with Augmentations and Trajectory Rectification - qaz812345/TrackNetV3
HuggingFace ▷ #NLP (10 messages🔥):
- Seeking M2M100 Finetuning: A member is asking for finetuning code for the M2M100 model.
- Request for PHI-2 Tuning Assistance: A member is looking for help with fine-tuning the PHI-2 model.
- Batch Size Strategy for Fine-tuning: Discussions suggest starting with smaller batch sizes, such as 32, and adjusting upwards to find the optimal batch size for a 2.7B model on 16GB of memory, with gradient accumulation as a possible solution (see the sketch after this list).
- Rust Port of minbpe Announced: The `minbpe-rs` project is a Rust port of `minbpe` and is available on GitHub with features like `GPT4Tokenizer`, `save`, `load`, and a `train` function. The project is led by @gnp with contributions to the documentation and README. Check out the project.
- Dependency Clash and Dataset Acquisition Trouble: One member mentions BERTopic's new release causing dependency conflicts with OpenAI's package and has temporarily locked their script to version 0.16.0. Simultaneously, another member seeks assistance in integrating the go-emotions dataset into their project.
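The gradient-accumulation suggestion above can be sketched in a few lines of PyTorch. This is a minimal, hypothetical example (the linear model, data, and hyperparameters are stand-ins, not anything from the discussion): scaling the loss by the number of accumulation steps lets several small backward passes approximate one large-batch update.

```python
import torch
from torch import nn

# Toy stand-ins for the real model and data (assumptions for this sketch).
model = nn.Linear(128, 2)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

micro_batch, accum_steps = 8, 4  # effective batch size = 8 * 4 = 32

optimizer.zero_grad()
for step in range(100):
    x = torch.randn(micro_batch, 128)
    y = torch.randint(0, 2, (micro_batch,))
    loss = loss_fn(model(x), y) / accum_steps  # scale so accumulated grads average out
    loss.backward()  # gradients accumulate across micro-batches
    if (step + 1) % accum_steps == 0:
        optimizer.step()
        optimizer.zero_grad()
```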
Link mentioned: GitHub - gnp/minbpe-rs: Port of Andrej Karpathy's minbpe to Rust: Port of Andrej Karpathy's minbpe to Rust. Contribute to gnp/minbpe-rs development by creating an account on GitHub.
HuggingFace ▷ #diffusion-discussions (10 messages🔥):
- Android Tablet Struggles with Fooocus: A member asked how to use Fooocus on an Android tablet, seeking guidance from the community.
- Professional Diffusers Offer Their Services: A member with expertise in web design, MVPs, app development, and various technical skills including Stable Diffusion and Computer Vision offered their services to startups and enterprises.
- The Forbidden Model Access: A user faced a 403 error while trying to download a model using vespa and sought assistance from the community to resolve it.
- Trouble Loading the StoryGen Model: A member encountered an issue loading the haoningwu/StoryGen model with the DiffusionPipeline due to a problem with the config JSON, and reached out for support, specifically tagging another user for help.
- Debate on AI-generated Video for "AI Horse": A user asked whether it's possible to create a 1-minute video on the topic of "AI Horse" entirely with diffusion, prompting another member to suggest using Pika or some other form of Diffusion Transformer for the task.
Modular (Mojo 🔥) ▷ #general (77 messages🔥🔥):
- Query on Mojo's Reporting Issues and Newsletter Contributions: A member inquired about how to get issues assigned and whether articles could be submitted for the Mojo newsletter; responses pointed out that the process involves demonstrating the ability to fix things, and that newsletter contributions aren't currently supported.
- Discussion on Assistive Technology Support in GTK: Members discussed the importance of good assistive technology support in applications, using GTK and the lack of certain features in it as an example. The value of such technologies was debated, but agreed to be beneficial in gaining user traction.
- Mojo Docs Update Inquiry: A member asked if the documentation on docs.modular.com is auto-generated from `mojo doc`; the reply indicated that while it is, there's a lot of non-public CI involved, and it isn't designed for public use yet.
- Performance Comparison between Mojo and Python: A member's comparison of number-printing speed between Mojo and Python led to a reference to a known issue about Mojo's lack of buffered IO and advice on performance benchmarking, with the issue apparently unaddressed since December.
- Docs.modular.com Display Bug at 995px Width: Members reported and discussed a UI bug on the docs.modular.com site where search results fail to display at certain browser widths. A dialogue with a developer revealed that this is a known behavior occurring at a width of 995px, which can be circumvented by avoiding that specific width or closing the search to view content.
Links mentioned:
- no title found: no description found
- Issues · modularml/mojo: The Mojo Programming Language. Contribute to modularml/mojo development by creating an account on GitHub.
- GitHub - basalt-org/basalt: A Machine Learning framework from scratch in Pure Mojo 🔥: A Machine Learning framework from scratch in Pure Mojo 🔥 - basalt-org/basalt
Modular (Mojo 🔥) ▷ #twitter (6 messages):
- Teaser Alert: Modular shared a mysterious teaser tweet, hinting at something brewing on the horizon.
- Anticipation Builds with Modular: A second tweet by Modular raises expectations among followers, suggesting an imminent reveal.
- Countdown to Excitement: The suspense continues with Modular's third teaser tweet, pointing to a significant announcement.
- Momentum Gathers at Modular: In a fourth tweet, Modular keeps the community on the edge of their seats with an apparent countdown.
- The Final Tease: Modular's final tweet in the series leaves followers eagerly awaiting a big revelation.
Modular (Mojo 🔥) ▷ #ai (3 messages):
- Seeking Engagement for AI Video: A member shared a YouTube video titled "The Rise of AI", made for a college assignment, and asked for engagement and feedback. They acknowledged limitations in content depth due to time constraints and mentioned that English is not their first language.
- The Quest for Artificial Conscious Life: A member expressed interest in double majoring in computational physics and computer science/engineering with the aim of creating artificial conscious life. They questioned the current state of AI, its inefficiency in power and data, and the potential need for advancements like quantum computing or ternary systems to achieve this goal.
- Skeptical View on Quantum Computing for AI: Discussing the use of quantum computing in AI, a member pointed out the challenges of randomness and efficiency in quantum systems, referencing the difficulty of performing simple operations consistently. Concerns were also voiced about government intervention potentially impeding progress in this domain.
- Ternary Computing Mentioned in AI Development: A brief mention of a ternary computing system, the Setun computer, was made in relation to the advancements necessary for developing artificial general intelligence (AGI). The member argued that computational architecture matters more than mere scaling of compute for progress towards AGI.
Links mentioned:
- Setun - Wikipedia: no description found
- The Rise of AI: (Turn on the Closed Caption) Join us on a journey through the rapid evolution of Artificial Intelligence, starting from...
Modular (Mojo 🔥) ▷ #🔥mojo (338 messages🔥🔥):
- Exploring Type State Patterns in Mojo: A user inquired about implementing the Type State Pattern in Mojo, and another member shared associated types in traits as a potential solution. This feature does not appear to be implemented in stable Mojo yet, but it might work with a workaround using a `Size` trait with `__getitem__` and `__setitem__`.
- Understanding Mojo Parameters and Arguments: One user clarified the difference between parameters and arguments in Mojo - parameters are compile-time constants, while arguments are runtime values. The confusion arose during a discussion about sorting algorithms, where a snippet using a `T: Sortable` trait with a `cmp_fn` function parameter was shared, prompting exploration of function parameters written in square brackets.
- Sorting With Traits Strategy: Another member shared an example quicksort implementation using traits and received feedback on enhancing it. Despite the code running into a "`T` does not implement the `__ge__` method" error, discussions included using `UnsafePointer` instead of `Pointer` and understanding that a `Sortable` trait with overloaded comparison operators (`__le__` and `__ge__`) can be useful for sorting custom data types.
- Issues with Pointers and Lists: There were discussions about a segmentation fault caused when trying to use strings with pointers. Users discussed potential causes such as misallocations or value semantics leading to unexpected behaviors, highlighting the intricacies of memory management in Mojo.
- Regex Functionality and Mojo Implementation: A user pondered the implementation of regex functionality in Mojo, sharing a Python example for context, and noted that as of the channel history cut-off, there is no regex implementation in Mojo. They expressed an intention to attempt a basic form of regex for a project idea.
Links mentioned:
- equality_comparable | Modular Docs: EqualityComparable
- collections | Modular Docs: Implements the collections package.
- sort | Modular Docs: Implements sorting functions.
- Sorting Techniques: Author, Andrew Dalke and Raymond Hettinger,. Python lists have a built-in list.sort() method that modifies the list in-place. There is also a sorted() built-in function that builds a new sorted lis...
- Generic Quicksort: Context Mojo Reference: Sort Mojo Version: 24.2.1 Demo: Sorting a Group of People by Age This demo showcases how to sort a group of people based on their age using a versatile QuickSort algorithm. Thi...
- Python -c command line execution method - Programmer Sought: no description found
- Traits | Modular Docs: Define shared behavior for types.
- simd | Modular Docs: Implements SIMD struct.
- playground.mojo: GitHub Gist: instantly share code, notes, and snippets.
- unsafe | Modular Docs: Implements classes for working with unsafe pointers.
- mojo/stdlib/src/builtin/anytype.mojo at main · modularml/mojo: The Mojo Programming Language. Contribute to modularml/mojo development by creating an account on GitHub.
- Are we web yet? Yes, and it's freaking fast! : no description found
- Parameterization: compile-time metaprogramming | Modular Docs: An introduction to parameters and compile-time metaprogramming.
- Ron Swanson Parks And Rec GIF - Ron Swanson Parks And Rec Its So Beautiful - Discover & Share GIFs: Click to view the GIF
- Issues · modularml/mojo: The Mojo Programming Language. Contribute to modularml/mojo development by creating an account on GitHub.
- mojo_zlib_classification/tools/utils.mojo at master · toiletsandpaper/mojo_zlib_classification: Contribute to toiletsandpaper/mojo_zlib_classification development by creating an account on GitHub.
- [Feature Request] `.__doc__` attribute · Issue #2197 · modularml/mojo: Review Mojo's priorities I have read the roadmap and priorities and I believe this request falls within the priorities. What is your request? I would like to be able to get the docstring of my str...
- MLIR: no description found
- 2023 LLVM Dev Mtg - MLIR Is Not an ML Compiler, and Other Common Misconceptions: 2023 LLVM Developers' Meetinghttps://llvm.org/devmtg/2023-10------MLIR Is Not an ML Compiler, and Other Common MisconceptionsSpeaker: Alex Zinenko------Slide...
Modular (Mojo 🔥) ▷ #community-projects (35 messages🔥):
- Cryptic Llama Project Enigma: Interest was expressed in building a project cryptically referenced as "🦙🦙🦙.🔥", with suggestions pointing towards an office suite with illustrative capabilities driven by text prompts.
- Mojo Projects Galore: Project updates included `prism`'s typed flags, `mog` for terminal styling, `gojo` emulating Go's `net` package, and work on `termios` for MacOS, all available on GitHub with nightly tuple updates required. (Prism, Mog, Gojo, Termios)
- Basalt Framework Seeks Web Devs: The Basalt machine learning framework team is seeking Web Development expertise, especially in UI/UX with NextJS and ShadCN knowledge, for launching and enhancing their autogenerated documentation. Visit Basalt's GitHub for details.
- Mojo and the World of JSX: A request was made to create an LSX.mojo repository for React-like development built on HTML syntax, suggesting a strong interest in component-based UI frameworks within Mojo. The idea of a Mojo static site generator was hinted at, with a Djot parser in development. (LSX Repo)
- MoCodes Breaks into Error Correction: The MoCodes project was shared, an Error Correction (De)Coding framework written in Mojo. It aims to optimize the compute-intensive error correction code processing traditionally handled by dedicated hardware. Collaboration is sought, as outlined in the README on GitHub.
Links mentioned:
- GitHub - basalt-org/basalt: A Machine Learning framework from scratch in Pure Mojo 🔥: A Machine Learning framework from scratch in Pure Mojo 🔥 - basalt-org/basalt
- GitHub - thatstoasty/prism: Mojo CLI Library modeled after Cobra.: Mojo CLI Library modeled after Cobra. Contribute to thatstoasty/prism development by creating an account on GitHub.
- GitHub - thatstoasty/mog: Style definitions for nice terminal layouts.: Style definitions for nice terminal layouts. Contribute to thatstoasty/mog development by creating an account on GitHub.
- GitHub - thatstoasty/gojo: Experiments in porting over Golang stdlib into Mojo.: Experiments in porting over Golang stdlib into Mojo. - thatstoasty/gojo
- GitHub - thatstoasty/termios: Mojo termios via libc: Mojo termios via libc. Contribute to thatstoasty/termios development by creating an account on GitHub.
Modular (Mojo 🔥) ▷ #performance-and-benchmarks (19 messages🔥):
- Exploring Performance with CPU Limits: In a test limiting the CPU to 1400 MHz, Mojo scalar performed at 1.4 ns per item, while Rust and Mojo SIMD were similar at about 1.0 ns per item, even after including debug prints before and after the timed section.
- Seeking the Optimal Parallelize Strategy: A member noted differences in the use of `parallelize` between the X thread demo and the `Matmul` documentation, with the latter specifying `num_workers` in contrast to the former. Performance variability and a lack of stability were reported when the number of workers was not set explicitly.
- The Multithreading Conundrum: Members discussed the complexity of and best practices for setting the number of workers in multithreading. It was highlighted that multithreading performance varies based on the number of cores, the problem at hand, and whether it's acceptable for the program to saturate all resources.
- Number of Workers: To Specify or Not?: Another member echoed this sentiment, emphasizing the challenges and considerations in multithreading, and suggesting that setting the number of workers higher than the number of cores can sometimes be beneficial, as demonstrated in a Modular blog post about Matmul.
- Performance Puzzles in Random Number Generation: A member posted a Mojo script for calculating pi via the Monte Carlo method, noting it was much slower than a Numba-jitted Python version, with a large portion of the time spent generating random numbers (see the sketch after this list). Following a recommendation to report this, an issue was opened on GitHub to address `random.random_float64` performance.
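For reference, the Python side of such a benchmark is usually fast because the random draws are vectorized rather than generated one call at a time. A minimal NumPy sketch of the Monte Carlo estimator (an illustration, not the script from the discussion):

```python
import numpy as np

def pi_monte_carlo(n: int = 10_000_000) -> float:
    # Draw n points uniformly in the unit square; the fraction landing
    # inside the quarter circle approximates pi / 4.
    xy = np.random.random((n, 2))
    inside = (xy ** 2).sum(axis=1) <= 1.0
    return 4.0 * inside.mean()

print(pi_monte_carlo())  # ~3.141 with 10M samples
```

A per-iteration loop calling the RNG once per point is the pattern that was reported as roughly two orders of magnitude slower.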
Links mentioned:
- Modular: Mojo🔥 - A journey to 68,000x speedup over Python - Part 3: We are building a next-generation AI developer platform for the world. Check out our latest post: Mojo🔥 - A journey to 68,000x speedup over Python - Part 3
- The Dos and Don'ts of Multithreading : Hubert Matthews describes some of the problems encountered in multithreading and discusses how to avoid them through appropriate design choices.
- functional | Modular Docs: Implements higher-order functions.
- [BUG] `random.random_float64` is extremely slow · Issue #2388 · modularml/mojo: Bug description Generating one random number at a time in a for loop is extremely slow, almost 2 orders of magnitude slower than a numba-jitted equivalent. Context: I tried to use a simple Monte Ca...
Modular (Mojo 🔥) ▷ #engine (24 messages🔥):
- C++ ORT Performance Queries: One member was curious how performance was being measured for C++ with ONNX Runtime (ORT) compared to Mojo. They discussed Python's overhead and considered whether C++ is inherently faster due to fewer Python API calls.
- Image Processing in Python vs. C++: Another discussion revolved around pre-processing images in Python/Mojo using numpy and cv2 versus C++ using native OpenCV and custom functions. It was noted that post-processing is primarily executed with native code in both languages.
- Benchmark Sharing Offer: One member mentioned they had conducted performance benchmarks across three languages and offered to share a comparative table of the results.
- ONNX Model Input Dilemma Solved: A member faced an issue with an ONNX model accepting an input tensor named "input.1" and sought a workaround for using it with the `model.execute` call. A solution using PythonObject and an alternative approach using kwargs in Python were provided (see the sketch below).
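The naming problem comes from "input.1" not being a valid Python identifier, so it cannot be passed as a literal keyword argument. A dict-based feed sidesteps this; the sketch below uses the plain onnxruntime API (not the MAX engine `model.execute` call discussed above, whose exact signature isn't shown in the thread), with a hypothetical model path and input shape:

```python
import numpy as np
import onnxruntime as ort

sess = ort.InferenceSession("model.onnx")  # hypothetical model file
x = np.zeros((1, 3, 224, 224), dtype=np.float32)  # hypothetical input shape

# Feeding inputs by dict avoids needing "input.1" to be a Python identifier.
outputs = sess.run(None, {"input.1": x})
```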
Modular (Mojo 🔥) ▷ #nightly (36 messages🔥):
- Pointer Conundrum and Unsafe Adventures: The community discussed the semantics of various pointer types, with suggestions to prefix some with "Unsafe" to reflect their nature. There's also work underway to phase out `LegacyPointer`, and contributions are encouraged, as seen in a small PR aimed at this effort.
- Troubleshooting the Update Snags: A user highlighted an issue with the recent update to Mojo version 2024.4.1618, where `SIMDType.to_int()` was causing build failures. It was clarified that the method has been replaced with a simple `int(...)` call following the update.
- Taking on String Comparisons: A snippet of code was proposed for implementing String comparisons with an eye toward future Unicode considerations, prompting a review of a previous PR that addressed similar concerns.
- Tuple Copy Mystery and UnsafePointers: A question was raised about the use of `__get_address_as_owned_value` in tuple copying operations, suggesting a possible conflict with how the new `UnsafePointer` types should handle references and lifetimes.
- String Representations and Semantic Conundrums: The distinction between `String()` and `String("")`, where the latter includes a null terminator, prompted discussions about their proper allocation behaviors and the philosophical question of what constitutes an empty string.
Links mentioned:
- [Feature Request] Explicit parametric alias with default argument · Issue #1904 · modularml/mojo: Review Mojo's priorities I have read the roadmap and priorities and I believe this request falls within the priorities. What is your request? As title. What is your motivation for this change? Exp...
- [stdlib] Replace `Pointer` by `UnsafePointer` in `stdlib/src/builtin/object.mojo` by gabrieldemarmiesse · Pull Request #2365 · modularml/mojo: Builtins imports behave in a weird way, I had to import LegacyPointer in stdlib/src/python/_cpython.mojo, I have no explanation for this. I just import what the compiler asks me to import :p See ht...
OpenAccess AI Collective (axolotl) ▷ #general (462 messages🔥🔥🔥):
- LLaMa 3 Tokenizer Troubles: Members of the Discord discussed issues with fine-tuning LLaMa 3 models, highlighting problems with BOS (beginning-of-sequence) tokens not being added as they should be. A workaround involved manually updating `tokenizer.json` using a Pull Request found in the Llama HF discussions, which fixed the issue (a quick verification sketch follows this list).
- GPUs and Training Time Revelations: Conversation sparked around the high resource expenditure of training AI models, especially upon the release of the Phi-3 models. One member noted a setup of 512 H100-80G GPUs for 7 days, indicating the large scale of computing power required.
- Phi-3 Surpasses Expectations: Comparisons in the channel showed that even though Phi-3 models are relatively small in parameter count (around 3.8B), they demonstrate performance competitive with much larger models, leading to speculation and interest in their efficiency and potential.
- OpenAI and the AI Race: Members discussed OpenAI's silence amidst rapidly evolving AI model releases from competitors. Speculation included OpenAI's focus on the release of GPT-5 in 2025 and the potential for current models to influence or accelerate those plans.
- Phi-3 Licensing and Capabilities: The open MIT license of the Phi series was highlighted as a significant advantage, despite the models' lack of extensive knowledge. Conversation suggested the models might excel at reasoning over memorization, positioning them as an exciting option for future application integration.
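A quick way to check whether a tokenizer actually prepends BOS after applying the fix; a minimal sketch assuming access to the gated meta-llama repo (any local path to the patched tokenizer works the same way):

```python
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3-8B")
ids = tok("hello world")["input_ids"]

# With a correct post-processor, the first id is the BOS token.
assert ids[0] == tok.bos_token_id, "BOS missing - check the tokenizer.json post-processor"
print(tok.convert_ids_to_tokens(ids))
```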
Links mentioned:
- Discord - A New Way to Chat with Friends & Communities: Discord is the easiest way to communicate over voice, video, and text. Chat, hang out, and stay close with your friends and communities.
- Axolotl - Instruction Tuning: no description found
- Axolotl - Dataset Formats: no description found
- chargoddard/llama3-42b-v0 · Hugging Face: no description found
- microsoft/Phi-3-mini-128k-instruct · Hugging Face: no description found
- Environment variables: no description found
- mattshumer/Llama-3-8B-16K · Hugging Face: no description found
- Reddit - Dive into anything: no description found
- Reddit - Dive into anything: no description found
- cognitivecomputations/dolphin-2.9-llama3-8b · Llama 3 Base Is Unique: no description found
- Efficiently fine-tune Llama 3 with PyTorch FSDP and Q-Lora: Learn how to fine-tune Llama 3 70b with PyTorch FSDP and Q-Lora using Hugging Face TRL, Transformers, PEFT and Datasets.
- meta-llama/Meta-Llama-3-8B · Update post-processor to add bos: no description found
- GitHub - janphilippfranken/sami: Self-Supervised Alignment with Mutual Information: Self-Supervised Alignment with Mutual Information. Contribute to janphilippfranken/sami development by creating an account on GitHub.
- meta-llama/Meta-Llama-3-8B-Instruct · Hugging Face: no description found
- Reddit - Dive into anything: no description found
OpenAccess AI Collective (axolotl) ▷ #axolotl-dev (19 messages🔥):
- GPU Struggles with 8-bit Optimizers: A member remarks that multi-GPU setups are necessary but points out issues with 8-bit optimizers not working as intended.
- VRAM-Voracious AdamW_Torch: The AdamW_Torch optimizer is identified as a VRAM-heavy alternative given the subpar performance of 8-bit optimizers.
- Seeking Configurations for 8-bit Optimizers: Members are requesting and sharing example configurations for 8-bit optimizers on models like LLaMA 3.
- Troubleshooting Discord Links: Members are attempting to share Discord links but facing issues with them not working as expected.
- Subjective Improvement Post Patch: After applying a patch to LLaMA 3, members notice subjective improvements despite loss metrics remaining unchanged, with emphasis on the "vibes eval" over loss data.
OpenAccess AI Collective (axolotl) ▷ #general-help (19 messages🔥):
- QMD vs. Markdown: There was a query about the sudden switch to qmd for documentation, with concerns raised about how it renders on GitHub.
- Quantization Config Inquiry: A member inquired about the quantization configuration for a 70B model, and it was clarified that the config.json from "examples/quantize.py" is commonly used.
- Merging Model Duration Concern: Discussion on the time it takes to merge LoRA weights back into the base model after fine-tuning a 70B model on 4 A100s; over one and a half hours was considered long by a member (a merge sketch follows this list).
- Conversational Dataset Clarification: A question about whether `train_on_inputs` affects labels in a multi-turn conversational dataset was confirmed; it particularly impacts user inputs.
- Dataset Types and Documentation: There was a request for information on dataset types, and a member shared a comprehensive link detailing the dataset formats supported by Axolotl, including conversation, pre-training, instruction tuning, template-free, and custom pre-tokenized datasets.
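For context on the merge step itself, this is roughly what merging LoRA adapters back into a base model looks like with PEFT; a minimal sketch with hypothetical paths (`merge_and_unload` folds the adapter weights into the base weights, and for a 70B model the wall-clock time is dominated by load/save I/O):

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Meta-Llama-3-70B",  # hypothetical base model
    torch_dtype="auto",
)
merged = PeftModel.from_pretrained(base, "./lora-out").merge_and_unload()
merged.save_pretrained("./merged-model")  # hypothetical output dir
```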
Link mentioned: Axolotl - Dataset Formats: no description found
OpenAccess AI Collective (axolotl) ▷ #community-showcase (1 message):
- Llama's Got Length: A link to Llama 3 with a 16K token context length was shared, accompanied by a seemingly impressed emoticon. The link leads to huggingface.co, indicating the user's interest in extended-context capabilities.
Link mentioned: mattshumer/Llama-3-8B-16K · Hugging Face: no description found
OpenAccess AI Collective (axolotl) ▷ #runpod-help (1 message):
duh_kola: not axolotl related but yeah i can't upload anything to hub using runpod
OpenAccess AI Collective (axolotl) ▷ #axolotl-phorm-bot (22 messages🔥):
- Clarification on YAML Config `conversation:` Key: A member inquired about the `conversation:` key for training datasets in the YAML config file. Another member clarified that it only applies to datasets of type `sharegpt`.
- Complications with `sharegpt` and `chatml`: When a member asked about the effects of specifying `type: sharegpt` and `conversation: chatml`, they were informed that this signifies the dataset is in ShareGPT format and instructs Axolotl to transform the data into ChatML format for model training (see the config sketch at the end of this section).
- Error Troubleshooting Steps Suggested: Following a member's report of multiple `SIGBUS` (Signal 7) errors during distributed computing, they were advised to check for memory alignment issues, review memory-mapped file usage, check hardware, update dependencies, and simplify their setup to diagnose the problem.
- Guide on Using Unsloth with Axolotl: A question about integrating Unsloth into Axolotl for training culminated in a brief guide: install dependencies, prepare the model and data, configure Unsloth with the correct parameters, run the training process, and monitor outcomes for efficient optimization.
Links mentioned:
- OpenAccess-AI-Collective/axolotl | Phorm AI Code Search: Understand code, faster.
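As a concrete reference for the `sharegpt`/`chatml` discussion above, the corresponding dataset entry in an Axolotl YAML config looks roughly like this; a minimal sketch with a hypothetical dataset path (the `type` and `conversation` keys are the ones named in the thread):

```yaml
datasets:
  - path: ./data/conversations.jsonl   # hypothetical dataset file
    type: sharegpt        # source data is in ShareGPT format
    conversation: chatml  # render each turn with the ChatML template
```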
OpenRouter (Alex Atallah) ▷ #announcements (7 messages):
- Load Balancer Optimizations In Progress: Traffic on Wizard 8x22b is causing performance hits, but load balancer adjustments are expected to improve latencies soon.
- Improved Throughput for Requests: Changes to the load balancer and fixes related to stop-token handling should enhance non-stream request throughput.
- Deletion of Nitro Instruct Model: Requests to Databricks: DBRX 132B Instruct (nitro) will now be rerouted to the main Databricks: DBRX 132B Instruct model.
- Introducing New Models and Extended Context Support: OpenRouter announces 3 new models, including a free Llama 3 finetune, as well as the extension of Llama 3 8B to a 16k context (see the sketch below). Alongside the model launches, improvements in prompt formatting and region-specific networking issues are also being tackled, with a focus on enhancing dynamic routing. Model discussions and details can be found here.
- MythoMax 13B Issue Resolution: Users experiencing problems with MythoMax 13B outputs should see improvements following a mitigation of issues by the top provider. Concerns can be reported in the provided discussion thread.
- Addressing Spike in 504 Errors: Users are experiencing 504 errors due to networking issues in the central and west US regions, affecting Llama 2 tokenizer models. A fix that removes the dependency on Hugging Face, which is currently down, is under development.
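The new and extended models above are reachable through OpenRouter's OpenAI-compatible API; a minimal sketch using the OpenAI Python SDK (the extended-context model id is illustrative and the key is a placeholder):

```python
from openai import OpenAI

client = OpenAI(
    base_url="https://openrouter.ai/api/v1",
    api_key="YOUR_OPENROUTER_API_KEY",  # placeholder
)

resp = client.chat.completions.create(
    model="meta-llama/llama-3-8b-instruct:extended",  # illustrative 16k-context id
    messages=[{"role": "user", "content": "Say hello in one sentence."}],
)
print(resp.choices[0].message.content)
```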
Links mentioned:
- Databricks: DBRX 132B Instruct by databricks | OpenRouter: DBRX is a new open source large language model developed by Databricks. At 132B, it outperforms existing open source LLMs like Llama 2 70B and Mixtral-8x7B on standard industry benchmarks for language...
- Lynn: Llama 3 Soliloquy 8B by lynn | OpenRouter: Soliloquy-L3 is a fast, highly capable roleplaying model designed for immersive, dynamic experiences. Trained on over 250 million tokens of roleplaying data, Soliloquy-L3 has a vast knowledge base, ri...
- Fimbulvetr 11B v2 by sao10k | OpenRouter: Creative writing model, routed with permission. It's fast, it keeps the conversation going, and it stays in character. If you submit a raw prompt, you can use Alpaca or Vicuna formats.
- Meta: Llama 3 8B Instruct (extended) by meta-llama | OpenRouter: Meta's latest class of model (Llama 3) launched with a variety of sizes & flavors. This 8B instruct-tuned version was optimized for high quality dialogue usecases. It has demonstrated strong...
OpenRouter (Alex Atallah) ▷ #app-showcase (3 messages):
- Contract Standards Awareness Suggestion: Product feedback suggested that users should be prompted to choose a contract standard during upload, to make clear that only specific contract types are supported. This may prevent confusion over non-supported contracts going unprocessed.
- User Localization and Contract Favorability Feature Ideas: Another suggestion was to allow users to set their location during onboarding or upload to account for local laws, and to add a feature indicating which party the user wants to favor in the negotiation process.
- Illegal Terms Detection Feature Request: It was also recommended that the product be able to detect illegal and onerous terms within contracts, to prevent dead contracts caused by the inclusion of illegal terms by non-lawyers.
- Keywords AI: A Tool for Developers Built on OpenRouter: An announcement was made for Keywords AI, a platform supporting OpenRouter including all models and the "bring your own key" option, highlighting its two-line integration and developer-centric features.
- DeepGaze Launch with Reddit Monitoring: The launch of DeepGaze was shared, a service that feeds multiple document types into GPT-4V and uses a Discord bot to identify Reddit users with issues matching its capabilities. DeepGaze leverages OpenRouter to keep up with the latest LLM models.
Links mentioned:
- no title found: no description found
- DeepGaze: no description found
OpenRouter (Alex Atallah) ▷ #general (474 messages🔥🔥🔥):
- More Woes with WizardLM-2: Users report inconsistent performance with WizardLM-2; some find success while others encounter incoherence or non-responsiveness. One user identified SillyTavern's "Assistant Prefill" as potentially causing issues with LLaMA 3 models, while another described difficulties stemming from Microsoft's billing system only showing one invoice.
- OR's Response to Technical Glitches: OpenRouter acknowledges issues related to provider tokenizers. A hotfix was deployed to address Hugging Face-related downtime, with a promise of a permanent fix that eliminates the dependency.
- Rates and Tokenomics Scrutinized: Users question how AI model providers can afford to offer services at current rates, especially compared to the costs of image generation. Discussions mention the possible role of FP8 quantization and active-worker discounts in reducing expenses, with one user citing Groq's hardware as potentially less economical due to high energy consumption.
- Exploring Uncharted Model Territories: Members share experiences and inquiries across a range of topics, including Phi-3-mini models, new LLaMA 3 70B variants, and WizardLM-2's possible connections with Microsoft. Enthusiasts are eager to get their hands on the newly released models, while others speculate on RWKV's future and compare AI writing styles.
- Anticipating Model Updates and Additions: OpenRouter users await uncensored versions of LLaMA 3 70B, discuss the significance of jailbreakable models, and ponder the potential arrival of Phi-3 on the platform. They also note preferences for the 8x22 models, emphasizing the balance between cost and functionality.
Links mentioned:
- imgur.com: Discover the magic of the internet at Imgur, a community powered entertainment destination. Lift your spirits with funny jokes, trending memes, entertaining gifs, inspiring stories, viral videos, and ...
- Groq Inference Tokenomics: Speed, But At What Cost?: Faster than Nvidia? Dissecting the economics
- openlynn/Llama-3-Soliloquy-8B · Hugging Face: no description found
- microsoft/Phi-3-mini-128k-instruct-onnx · Hugging Face: no description found
- Work-to-rule - Wikipedia: no description found
- Tweet from Eric Hartford (@erhartford): Dolphin-2.9-llama3-8b generously sponsored by @CrusoeCloud ETA Saturday. Lots of collaboration with @LucasAtkins7 and @FernandoNetoAi. Dolphin-2.9-llama3-70b to follow. Dolphin-2.9-mixtral-8x22b stil...
- @WizardLM on Hugging Face: "🔥🔥🔥 Introducing WizardLM-2! 🚀 Release Blog: ...": no description found
- dreamgen/opus-v1.2-llama-3-8b · Hugging Face: no description found
- OpenRouter: A router for LLMs and other AI models
- OpenRouter: A router for LLMs and other AI models
- FireAttention â Serving Open Source Models 4x faster than vLLM by quantizing with ~no tradeoffs: Serving Open Source Models 4x faster than vLLM by quantizing with ~no tradeoffs
- microsoft/Phi-3-mini-4k-instruct · Hugging Face: no description found
- Meta: Llama 3 70B Instruct (nitro) by meta-llama | OpenRouter: Meta's latest class of model (Llama 3) launched with a variety of sizes & flavors. This 70B instruct-tuned version was optimized for high quality dialogue usecases. It has demonstrated stron...
- Lynn: Llama 3 Soliloquy 8B by lynn | OpenRouter: Soliloquy-L3 is a fast, highly capable roleplaying model designed for immersive, dynamic experiences. Trained on over 250 million tokens of roleplaying data, Soliloquy-L3 has a vast knowledge base, ri...
OpenAI ▷ #ai-discussions (303 messages🔥🔥):
- Atlas Robot Creeps into Discussions: The latest release of the Atlas robot spurred conversations about its perceived creepiness and social media buzz strategy, with anticipation for the model intended for sale and one member looking forward to its eventual capabilities.
- The AI Spirituality Debate: A member asked what a form of AI spirituality might look like, leading to a heated debate about consciousness, humanity, and emotions in AI, moderated under the rule against non-secular discussions.
- GPT-3's API and Interface Innovations: Discussion touched on the potential of creating APIs with MyGPT's code and advances in tools like MetaGPT and Devika, which help write apps and might interact with GitHub.
- LLaMa 3's Importance and Limitations: Members discussed recent improvements across various AI models, with LLaMa 3 earning mixed reviews for its performance, and rumored GPT-5 release dates dismissed as fake absent official announcements.
- Generative Model Literature and Exuberant AI: A request for in-depth resources on AI and generative algorithms like ChatGPT and DALL-E was met with suggestions to search OpenAI's published papers and repositories like Arxiv, while an anecdote about LLaMa 3's unusual output (overusing exclamation marks) highlighted both the unexpected quirks and perceived limitations of the model.
Links mentioned:
- Joe Bereta Source Fed GIF - Joe Bereta Source Fed Micdrop - Discover & Share GIFs: Click to view the GIF
- Biorobotics - Wikipedia: no description found
- Generative models: This post describes four projects that share a common theme of enhancing or using generative models, a branch of unsupervised learning techniques in machine learning. In addition to describing our wor...
- Research: We believe our research will eventually lead to artificial general intelligence, a system that can solve human-level problems. Building safe and beneficial AGI is our mission.
- GPT-4: We've created GPT-4, the latest milestone in OpenAI's effort in scaling up deep learning. GPT-4 is a large multimodal model (accepting image and text inputs, emitting text outputs) that, while less ca...
OpenAI ▷ #gpt-4-discussions (33 messages🔥):
- GPT Agent and LLama 3 70B Integration Attempt: A member shared their attempt at integrating Agent GPT v2 with LLama 3 70B using Groq but faced issues, as others reported the integration was failing. Some users did eventually find it operational, suggesting intermittent access or user-specific conditions affecting functionality.
- Caution Against Sharing CGPT Chats: Concerns were raised about posting share URLs from ChatGPT chats, with members being cautious about sharing logs due to access questions and uncertainty about how model responses are evaluated and improved without explicit feedback.
- Exploring Convolutional Layers and LoRA in LLMs: A discussion was held on whether convolutional layers, referred to as Hyena, are comparable to LoRA layers in other models like Stable Diffusion. One member noted that LoRA can be used for fine-tuning large language models (LLMs), with others inquiring about models actively employing these techniques and their benefits.
- Tools for Managing ChatGPT History Needed: Users are seeking tools or alternative websites to better manage their ChatGPT history, highlighting the limitations of the current portal offered by OpenAI. Attention was directed towards the potential need for an API key in any third-party management solution.
- Clarification on Fine-tuning and File Retention with ChatGPT: A user was informed that fine-tuning means feeding training data through the API to change model behavior (see the sketch below), and that documents uploaded to ChatGPT act only as reference material and do not alter the underlying model. Additionally, it was pointed out that files attached to a chat are retained per existing OpenAI guidelines, with a user mentioning a 3-hour retention period based on prior conditions.
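For the fine-tuning half of that clarification, the API flow looks roughly like this; a minimal sketch using the OpenAI Python SDK (the file name is illustrative, and fine-tuning is only offered for specific base models):

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Upload a JSONL file of training examples, then start a fine-tuning job.
f = client.files.create(file=open("train.jsonl", "rb"), purpose="fine-tune")
job = client.fine_tuning.jobs.create(training_file=f.id, model="gpt-3.5-turbo")
print(job.id, job.status)
```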
OpenAI ▷ #prompt-engineering (24 messages🔥):
- Brevity is Key in Custom Instructions: Users discussed the optimal length for custom instructions in ChatGPT; one user opts for minimal guidance to save context space, while others experiment with length, finding that overly long instructions may be counterproductive, as the AI might "forget" them.
- Seeking Criminal Law Prompts: A law student inquired about prompts for criminal law, but the request remains open for suggestions or tips from the community.
- Optimizing Email Enhancement with GPT-4: A user is fine-tuning a program to enhance emails using GPT-4, asking for advice on how to improve the prompts when the AI's outputs are not satisfying.
- Where's the Prompt Library?: A member of the channel inquired about the location of a prompt library, a resource that could aid in developing more effective prompts.
- Prompt Engineering Tips and Ethics: A discussion emerged on the practice of prompt engineering, touching on the ethical implications and concerns of sharing potentially harmful techniques; however, no concrete techniques or examples were provided.
OpenAI ▷ #api-discussions (24 messages🔥):
- Brief Custom Instructions Preferred: A user noted keeping custom instructions simple, such as "Include semicolons, colons, and em dashes in your responses where applicable", to preserve context window space.
- Contemplating Instructions' Length and Quality: A discussion about prompt length indicated users perceive that a longer, more detailed prompt does not necessarily yield higher quality responses from the AI, suggesting shorter prompts may sometimes be preferable.
- Exploring Prompt Division Strategies: In response to uncertainty about handling large prompts, one member advised breaking them down and spreading them over multiple messages to prevent the AI from forgetting earlier parts.
- Prompting Techniques and Personalities: A user shared admiration for a prompt engineer named RageGPTee, known for advanced techniques and for "disappearing" after sharing groundbreaking skills, while another person humorously exaggerated his capabilities.
- Email Enhancement via GPT-4 Queries: A member is seeking advice on optimizing prompts for a program that uses GPT-4 to enhance email drafting, following occasional subpar outputs from the AI.
LAION ▷ #general (298 messages🔥🔥):
- LLM Multimodal Concerns: Channel participants discussed how existing multimodal datasets, totaling around 2 million pairs, can cause models to overfit on specific datasets such as the GPT-4V captions for LAION-COCO. This overfitting is a noted problem in current multimodal approaches.
- MoA Architecture Unveiled: A new architecture called Mixture-of-Attention (MoA) was shared, described in this paper, which allows disentangling subject and context generation in personalized image generation.
- AI Surveillance Bots on Discord: Concerns about surveillance bots joining Discord servers were discussed, with a link provided to kickthespy.pet, a service that identifies such bots using an API vulnerability.
- Discussion on Training Text-Image Diffusion Models: Users exchanged insights about the challenges of training text-image diffusion models, emphasizing the importance of data quality, size, and model architecture. One interesting point was that while Chinchilla's training method isn't detailed, dropout and other regularization methods might significantly impact training outcomes.
- Adobe Unleashes Firefly Image 3: Adobe announced the beta release of the Adobe Firefly Image 3 Foundation Model, which offers improved image generation quality and speed and is now integrated into Photoshop and accessible through the Firefly web application. Users were curious to test its capabilities with different creative prompts.
Links mentioned:
- Kick the Spy Pet: no description found
- Adobe Introduces Firefly Image 3 Foundation Model to Take Creative Exploration and Ideation to New Heights: no description found
- Mixture of Attention: no description found
- Training Compute-Optimal Large Language Models: We investigate the optimal model size and number of tokens for training a transformer language model under a given compute budget. We find that current large language models are significantly undertra...
- Oh No Top Gear GIF - Oh No Top Gear Jeremy Clarkson - Discover & Share GIFs: Click to view the GIF
- bghira: Weights & Biases, developer tools for machine learning
- Papers with Code - CUB-200-2011 Dataset: The Caltech-UCSD Birds-200-2011 (CUB-200-2011) dataset is the most widely-used dataset for fine-grained visual categorization task. It contains 11,788 images of 200 subcategories belonging to birds, 5...
- The Rise of AI: (Turn on the Closed Caption) Join us on a journey through the rapid evolution of Artificial Intelligence, starting from...
- ptx0/mj-v52-redux at main: no description found
- How To Build Generative AI Models Like OpenAI's Sora: If you read articles about companies like OpenAI and Anthropic training foundation models, it would be natural to assume that if you don't have a billion dol...
- [AINews] FineWeb: 15T Tokens, 12 years of CommonCrawl (deduped and filtered, you're welcome): AI News for 4/19/2024-4/22/2024. We checked 6 subreddits and 364 Twitters and 27 Discords (395 channels, and 14973 messages) for you. Estimated reading time...
LAION ▷ #research (38 messages🔥):
- Benchmarking Blink's Visual Perception: A new benchmark named Blink has been introduced for testing multimodal language models (LLMs) on their visual perception abilities. It covers tasks that humans solve quickly but that are surprisingly challenging for advanced multimodal LLMs like GPT-4V and Gemini, which perform only marginally better than random guessing. Read more about Blink.
- Upscaling Difficulties in Image Extrapolation: There is ongoing work on improving the results of 2D RoPE extrapolation from a 256x256 resolution to 1024x1024, which currently does not yield impressive results and requires higher-resolution tuning.
- Piecewise-Rectified Flow Integrates with ControlNet-Tile Pipeline: Piecewise-Rectified Flow (PeRFlow) was mentioned for upsampling images significantly, going from 64px to 1024px through a process that integrates the flow with the ControlNet-Tile pipeline and refines the images. This can be found in GitHub's piecewise-rectified-flow.
- HiDiffusion Enhances Diffusion Model Resolutions: HiDiffusion, a new development by MEGVII Technology and ByteDance, claims to increase the resolution and speed of diffusion models with a single line of code. The module displays artifacts in its outputs, raising questions about its efficacy in generating coherent high-resolution images. Explore the HiDiffusion project.
- SEED-X Multimodal Foundation Model: SEED-X aims to bridge gaps in multimodal foundation models by comprehending images of arbitrary sizes and enabling multi-granularity image generation. The unified and versatile foundation model demonstrates effectiveness in real-world applications across both comprehension and generation tasks.
Links mentioned:
- SEED-X: Multimodal Models with Unified Multi-granularity Comprehension and Generation: The rapid evolution of multimodal foundation model has demonstrated significant progresses in vision-language understanding and generation, e.g., our previous work SEED-LLaMA. However, there remains a...
- TextSquare: Scaling up Text-Centric Visual Instruction Tuning: Text-centric visual question answering (VQA) has made great strides with the development of Multimodal Large Language Models (MLLMs), yet open-source models still fall short of leading models like GPT...
- bghira: Weights & Biases, developer tools for machine learning
- piecewise-rectified-flow/README.md at main · magic-research/piecewise-rectified-flow: Contribute to magic-research/piecewise-rectified-flow development by creating an account on GitHub.
- BLINK: Multimodal Large Language Models Can See but Not Perceive: We introduce Blink, a new benchmark for multimodal language models (LLMs) that focuses on core visual perception abilities not found in other evaluations. Most of the Blink tasks can be solved by huma...
- GitHub - megvii-research/HiDiffusion: Contribute to megvii-research/HiDiffusion development by creating an account on GitHub.
LAION ▷ #learning-ml (6 messages):
- Coding Assistant Collaboration: A member mentioned they are starting to build an NLP coding assistant focused on JavaScript/Rust rather than Python and expressed interest in collaborating with others.
- Time Constraints on Collaboration: softmax_function indicated a willingness to help occasionally with the project, citing a busy schedule with multiple projects.
- In Search of Past Work: jcarbonnell inquired about the existence of a repository with previous work that could be useful for the NLP coding assistant project.
- Admitting Past Limitations: softmax_function acknowledged discontinuing a previous project due to a lack of AI knowledge at the time, but noted an improved ability to contribute now.
- Seeking Task Assignment Clarification: jcarbonnell expressed difficulty in assigning tasks without understanding softmax_function's past contributions, and intends to try a TrainedModel.py script and dataset shared by them.
LlamaIndex ▷ #blog (6 messages):
- RAG Experimentation Gets a Makeover: Aishwarya Prabhat introduces a framework named DREAM for experimenting with Distributed RAG, highlighting the importance of a robust infrastructure for creating production-ready RAG systems. Details and insights are hosted in the LlamaIndex tweet.
- Finance Bot Framework by LlamaIndex: Hanane Dupouy shares a mini-blog on using @llama_index to build a finance agent that can retrieve stock prices and summarize financial news, enhancing interactions with public company data. Further exploration can be found in the shared Twitter link.
- ColBERT with a Memory Twist: Discussing the challenges of adding conversation history to a RAG pipeline, LlamaIndex proposes a retrieval agent powered by ColBERT that stores "state" for a conversational assistant. Learn more about this method in their recent tweet.
- RAG Fine-Tuning with LoRA: Mariboo's tutorial is highlighted for demonstrating the use of LoRA weights to fine-tune embedding models, a critical part of the RAG pipeline, using @llama_index finetuning abstractions and @huggingface. Dive into the tutorial via LlamaIndex's Twitter post.
- Level-Up Your RAG with Open-Source Rerankers: @JinaAI_ releases two open-source rerankers that enhance RAG systems by applying a second level of ranking on top of vector search over embeddings. Details about the rerankers are shared in a tweet by LlamaIndex.
- CRAG: Innovative Layer for RAG Retrieval: LlamaIndex discusses Corrective RAG (CRAG), which uses a "reflection" layer to categorize retrieved information as "Correct," "Incorrect," or "Ambiguous," addressing the problem of bad retrieval in RAG. Insights into CRAG are detailed in LlamaIndex's tweet.
LlamaIndex ▷ #general (188 messages🔥🔥):
- Choosing the Right Retrieval Method: Users discussed different retrieval approaches such as RAG, CRAG, and reranking, and weighed Vector Databases against Knowledge Graphs. The consensus points towards use-case specificity, especially when dealing with company summaries where information loss is a concern, leading to preferences for larger chunk sizes or for SQL and graph technologies.
- Integration and Summarization Challenges: One member shared frustration over a bot that only replies with document-related responses after integrating ChainLit with LlamaIndex, hinting at context-management issues within their Retrieval-Augmented Generation (RAG) system.
- AI Models and OpenAI Dependence: Questions arose around using alternative models like Groq, Bedrock, and Ollama within the llama_index infrastructure, with members resolving doubts related to API key errors and correct embedding model usage.
- Indexing and Storage Explorations: Members inquired about the functionality and integration of Vector Stores such as Supabase, Chromadb, and Qdrant, often confronting warnings, bugs, or 401 errors that point to a lingering reliance on OpenAI's API key even when it is not explicitly used (see the configuration sketch after this list).
- Summarization Using DocumentSummaryIndex: One member sought advice on making DocumentSummaryIndex consider all nodes for summarization, as the tool selected only one of the several nodes produced by the document split for summary generation.
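The 401 errors usually mean LlamaIndex is still defaulting to OpenAI for the LLM and/or embeddings. A minimal sketch of pointing both at local alternatives via the global `Settings` object (model names and the data directory are illustrative, following the Starter Tutorial (Local Models) linked below):

```python
from llama_index.core import Settings, SimpleDirectoryReader, VectorStoreIndex
from llama_index.embeddings.huggingface import HuggingFaceEmbedding
from llama_index.llms.ollama import Ollama

# Route both the LLM and embeddings away from OpenAI.
Settings.llm = Ollama(model="llama3", request_timeout=120.0)
Settings.embed_model = HuggingFaceEmbedding(model_name="BAAI/bge-small-en-v1.5")
Settings.chunk_size = 1024  # larger chunks lose less of each company summary

docs = SimpleDirectoryReader("./data").load_data()
index = VectorStoreIndex.from_documents(docs)  # no OpenAI key required now
```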
Links mentioned:
- Agents - LlamaIndex: no description found
- Migrating from ServiceContext to Settings - LlamaIndex: no description found
- Auto-Retrieval from a Weaviate Vector Database - LlamaIndex: no description found
- RAG CLI - LlamaIndex: no description found
- Token Counting Handler - LlamaIndex: no description found
- Bedrock - LlamaIndex: no description found
- Usage Pattern - LlamaIndex: no description found
- Ollama - Llama 2 7B - LlamaIndex: no description found
- LocalAI - LlamaIndex: no description found
- Portkey - LlamaIndex: no description found
- fix qdrant bug with checking existing collection by logan-markewich · Pull Request #13009 · run-llama/llama_index: Small bug with getting info from possibly existing collection
- Building an Agent around a Query Pipeline - LlamaIndex: no description found
- Indexing & Embedding - LlamaIndex: no description found
- CrewAI RAG using Tools - Mervin Praison: no description found
- Using Documents - LlamaIndex: no description found
- Pathway Reader - LlamaIndex: no description found
- Querying - LlamaIndex: no description found
- Starter Tutorial (Local Models) - LlamaIndex: no description found
- How to use UpTrain with LlamaIndex - LlamaIndex: no description found
LlamaIndex ▷ #ai-discussion (5 messages):
- Infini Attention Explained: An explanation of the new Infini Attention technology was shared on LinkedIn, highlighting its potential and expressing anticipation for upcoming implementations. Read the explainer on LinkedIn.
- Comprehensive AI Funding Data Updated: A comprehensive dataset tracking AI funding and company distribution by city is now available for community review. Check out the dataset and related city-distribution analysis via Google Sheets or the tweet by @WangUWS on Twitter.
- LLM-Ready Markdown Gets a Boost: LLM-ready Markdown reaches a new level of integration with FireCrawl and LlamaIndex. Read about the advancements on Medium.
- Launching Schema-Controlled Knowledge Graphs: WhyHow.AI introduced a significant upgrade to their Knowledge Graph SDK, enabling the creation of schema-controlled automated knowledge graphs from PDFs. For insights and participation in the beta program, refer to the announcement on Medium.
- Debate on Optimal Databases for LLM Training: There's an active conversation about the ideal database type for LLM training, with questions raised about the suitability of relational, document, and columnar databases, as well as the necessity of vector databases.
Links mentioned:
- [FrontierOptic.com] AI Raise Tracking - April 21 2024 - Community Review Copy: Cover FrontierOptic.com AI Startup Fund Raise Data (Since May 2023) - Community Review Copy ...
- Tweet from Howe Wang (@WangUWS): To celebrate 20 years since @HilaryDuff sang 'Could be New York, Maybe Hollywood and Vine, London, Paris, maybe Tokyo,' in 'Wake Up'. I cleaned up the AI Hype Train data's location...
OpenInterpreter ▷ #general (110 messages🔥🔥):
- Exploring Open Interpreter's Features and Integration: General discussions about Open Interpreter (OI) functionality included questions about using the `--server` argument for building clients, challenges with OI on Windows systems, and installation issues linked to a specific GitHub issue. There was also a mention of successfully using OI with the Llama 3 LLM for Python tasks.
- Model Compatibility and Performance: Users discussed the performance of various models with OI, including Llama 3 70B, with one confirming it runs well in `--local` mode. Meanwhile, there were queries about the best text-to-speech services for live streaming and humanlike interaction.
- AI Vision Model Clarifications: It was indicated that Open Interpreter uses GPT-4-vision-preview for recognizing screenshots. The model name was provided in response to a user's inquiry about the LLM used for vision tasks.
- Development Challenges and Solutions Shared: Users provided solutions for issues such as pytesseract errors and shared fixes, including the command `pip install --upgrade litellm`. Troubleshooting contributions are also being streamed and shared on platforms like YouTube, with a video detailing how to integrate OI with the Groq API for potentially cheaper operation (see the sketch at the end of this section).
- Community Collaboration and Development: The community is actively discussing contributions to OI, offering help to new users interested in hardware like the Raspberry Pi, and sharing their setups. One user mentioned reaching 100 contributors on GitHub for OI, while another shared a GitHub pull request they authored. There's also interest in sharing default configuration files to improve model interactions.
Links mentioned:
- Discord - A New Way to Chat with Friends & Communities: Discord is the easiest way to communicate over voice, video, and text. Chat, hang out, and stay close with your friends and communities.
- Que GIF - Que - Discover & Share GIFs: Click to view the GIF
- Tweet from Robert Scoble (@Scobleizer): #17: Making humans better with new AI The Rabbit AI device took the Consumer Electronics Show by storm in January, which inspired @hellokillian Killian Lucas, founder of Open Interpreter, to build a...
- OS Control enabled> open notepad and write "hello" Let's start by try - Pastebin.com: Pastebin.com is the number one paste tool since 2002. Pastebin is a website where you can store text online for a set period of time.
- Bug when fresh install and new start · Issue #1185 · OpenInterpreter/open-interpreter: Describe the bug when i run it. this warning shown interpreter /opt/conda/lib/python3.11/site-packages/pydantic/_internal/fields.py:151: UserWarning: Field "model_id" has conflict with prote...
- Tweet from Nik Shevchenko (@kodjima33): FRIEND became the largest opensource AI wearable community in the world To support the builders, we are launching an App Marketplace You can now build your own app and it will work with the device ...
- 01/project_management/hardware/devices/raspberry-pi at main · OpenInterpreter/01: The open-source language model computer. Contribute to OpenInterpreter/01 development by creating an account on GitHub.
- posts/llama3_new.pdf at main · ishank26/posts: resources, thoughts and notes. Contribute to ishank26/posts development by creating an account on GitHub.
- How to use Open Interpreter cheaper! (LM studio / groq / gpt3.5): Part 1 and intro: https://www.youtube.com/watch?v=5Lf8bCKa_dE0:00 - set up1:09 - default gpt-42:36 - fast mode / gpt-3.52:55 - local mode3:39 - LM Studio 5:5...
- Update local profile so it doen't use function calling by Notnaton · Pull Request #1213 · OpenInterpreter/open-interpreter: leaving model = gpt4 will result in function calling. Most LM Studio models dont use function calling. making it not work Describe the changes you have made: Reference any relevant issues (e.g. "...
- (oi) C:\Users\ivan>interpreter --api_base "https://api.groq.com/openai/v1" --api - Pastebin.com: Pastebin.com is the number one paste tool since 2002. Pastebin is a website where you can store text online for a set period of time.
- GitHub - KoljaB/RealtimeTTS: Converts text to speech in realtime: Converts text to speech in realtime. Contribute to KoljaB/RealtimeTTS development by creating an account on GitHub.
- Jupyter export magic command by tyfiero · Pull Request #986 · OpenInterpreter/open-interpreter: Describe the changes you have made: Added a %jupyter magic command to export the current session as a jupyter notebook file, that you can run in Google Collab. Reference any relevant issues (e.g. &quo...
- Bump version of tiktoken by minamorl · Pull Request #1204 · OpenInterpreter/open-interpreter: Describe the changes you have made: Bumped version of tiktoken since build process is broken for some reason. This PR fixes broken process. Reference any relevant issues (e.g. "Fixes #000"):...
OpenInterpreter ▷ #O1 (22 messages🔥):
- Mix-up with Model Names: One member said they had mistakenly claimed to get Open Interpreter working with Groq and Llama 3 70b; they meant a similarly named service, and clarified that 01 currently supports only OAI for the cloud option.
- Llama 3 Models Stability Issues: It was mentioned that Llama 3 70b seems more unstable compared to Llama 3 8b, though specific details about the instability were not provided.
- Windows Client Troubles: Several members are experiencing issues with 01 on Windows, with suggestions indicating there might be a client-related problem that needs addressing.
- Recording Woes on M1 Mac: Users reported an issue where pressing the spacebar on an M1 MacBook did not initiate recording in 01, but instead kept inputting spaces; various solutions were suggested, including installing ffmpeg, checking microphone and terminal permissions, or using a specific version of Python via conda.
- Cloud Compatibility Request: A member expressed interest in running 01 in the cloud, such as on brev.dev, asking about compatibility with cloud services like Scaleway, highlighting a need for cross-platform support.
Interconnects (Nathan Lambert) ▷ #ideas-and-feedback (39 messages🔥):
- The Quest for a Click-Worthy AGI Title: The channel explored various provocative titles for an article on AGI, aiming to strike a balance between clickbait and substance. Titles like "AGI Isn't real," "AGI is religion, not science," and "AGI is what you want it to be" were debated.
- The Importance of Audience Satisfaction: Nathan underscored the priority of serving current readers over attracting new ones, indicating that current Discord members would appreciate the content regardless of the title's click-worthiness.
- Controversial Paper Discourse: A discussion took place addressing widespread criticism of the Sparks paper within the community, citing issues like irreproducibility and overhyped claims.
- Debating AGI's True Nature: The conversation touched on beliefs about AGI, with some members suggesting it's more a matter of faith than science. A Business Insider article was mentioned in which Mistral's CEO Arthur Mensch expressed skepticism about tech giants' portrayal of AGI.
- Legal Spectacle on AGI Definition: Nathan found humor in the idea that a jury might have to determine the definition of AGI due to a clause between OpenAI and Microsoft, with a community member suggesting it could be used strategically by OpenAI to sever ties with Microsoft.
Link mentioned: AI CEO says people's obsession with reaching artificial general intelligence is "about creating God": Arthur Mensch doesn't feel concerned about AI surpassing human intelligence, but he does worry about American tech giants dominating the field.
Interconnects (Nathan Lambert) ▷ #news (44 messages🔥):
- Phi Series Benchmarks Stir Debate: Tweets shared in the community highlight the impressive benchmark results of Phi-3, calling LLAMA 3 8B a standout model and crediting Phi-3 Mini (4b), Small (7b), and Medium (14b) with significant benchmark gains from synthetic data pipelines. Concerns are raised about relying on benchmarks to evaluate models, suggesting that overfitting on benchmarks makes models like Phi-3 perform well in tests but poorly out-of-distribution (OOD).
- Skepticism Surrounding Phi-3's Validity: Users express suspicion about the integrity of Phi-3, with some characterizing it as "SUS" and others critiquing its training data as mainly textbooks, which could advantage it on benchmarks like MMLU without ensuring broad capabilities.
- Phi-3 Evaluated as "Clusterfuck": A conversation around Phi-3 criticizes how its evaluations are presented, pointing out the lack of disclosure about the data pipeline and the questionable inclusion of a matplotlib plot as a JPEG in the documentation.
- Insights on Training Data and GPU Priorities: The discussion sheds light on the possibility that the focus on smaller models stems from GPU limitations at Microsoft Research (MSR), with comparisons of GPU resource allocation between MSR and other teams or organizations such as OAI.
- Phi-3 Anticipated Release and Multilingual Capability: Conversation anticipates Phi-3's impending release under an MIT license and notes its multilingual capabilities, indicating a broader scope than previously recognized.
Links mentioned:
- Tweet from Sebastien Bubeck (@SebastienBubeck): @itsGauravAi Good thing that you will be able to try for yourself tomorrow :-).
- Tweet from Dylan Patel (@dylan522p): LLAMA 3 8B was amazing but will be overshadowed Phi-3 mini 4b, small 7b, medium 14b this week, and the benchmarks are fucking insane Synthetic data pipelines are massive improvements over internet dat...
- Tweet from Teortaxes (@teortaxesTex): @angelusm0rt1s @fchollet It's my conviction that you can benchmark benchmarks by how well phi-2 does on them relative to some obviously capable models like Mixtral If phi-2 >> mixtral your ...
- Where Is GIF - Where Is My - Discover & Share GIFs: Click to view the GIF
- Tweet from near (@nearcyan): blacked out the irrelevant parts of the phi-3 paper to help everyone understand how it performs so well for its size
- Tweet from Susan Zhang (@suchenzang): oh no not this again
- Phi-3 Technical Report: A Highly Capable Language Model Locally on Your Phone: We introduce phi-3-mini, a 3.8 billion parameter language model trained on 3.3 trillion tokens, whose overall performance, as measured by both academic benchmarks and internal testing, rivals that of ...
Interconnects (Nathan Lambert) ▷ #ml-questions (9 messages🔥):
- Evaluations Categorization in the Spotlight: A member discusses the Evals section of their research, touching on the immediate utility of automated evaluations like MMLU and BIGBench versus time-costly human evaluations like ChatBotArena.
- The Role of Perplexity-Based Evals: The same member questions the role of perplexity-based evaluations like AI2's Paloma and how they compare to task-based evaluations such as MMLU. There's uncertainty about whether Paloma was intended just for internal checks during training or as a broader public benchmark.
- Benchmark Categorization Approval: Both members express appreciation for the categorization of benchmarks from the MT Bench paper, indicating that it provides a helpful framework, even though the categorization of tools like Paloma isn't clear-cut.
- Utility of Multi-Dataset Perplexity-Based Metrics in Training: A member ponders whether multi-dataset perplexity-based evaluations are more about monitoring model performance at training checkpoints than about post-completion model competitions, and seeks confirmation of this understanding.
- Confirming Perplexity's Role: Another member confirms that perplexity-based evaluations are indeed used as checkpoints during training rather than as competitions for completed models, though the concept is relatively new to them as well (a toy illustration follows).
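To make the checkpoint usage concrete: a multi-dataset perplexity eval just exponentiates the mean per-token negative log-likelihood on each held-out set at every checkpoint. A minimal sketch with made-up NLL values (not Paloma's actual numbers):

```python
# Toy illustration of a checkpoint-style perplexity metric: exp(mean token NLL).
# The NLL values are made up; a real eval would compute them from model logits.
import math

token_nlls = [2.31, 1.87, 2.05, 2.44]               # per-token NLL in nats
perplexity = math.exp(sum(token_nlls) / len(token_nlls))
print(f"checkpoint perplexity = {perplexity:.2f}")  # lower is better
```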
Interconnects (Nathan Lambert) ▷ #random (25 messages🔥):
- Discord's Hidden Gem: Despite having 13k free subscribers and 250 eligible for Discord, only about 50 have joined the channel, with plans to make its value more obvious through a quarterly shoutout, hinting at Ben Thompson's style.
- Peek into Deep Dives: A member shared their analysis of the "roadmap to pluralism" paper, with feedback suggesting the topic is currently evergreen content, and welcomes any thoughts on the Typefully draft.
- Community Engagement Differentials: Some members mention they enjoy lurking and reading the content shared in the channel, while another voices the challenge of following too many Discords.
- The Ephemeral Tweeter: One user is amused by a researcher (Ross Taylor, lead of Galactica) who posts interesting tweets and deletes them within seconds, positing that past negative feedback might explain such a fleeting digital presence.
- Candid Interviews Await NDA Clarity: The host expresses interest in interviewing Ross Taylor but shows reluctance due to potential NDA restrictions that could prevent an open and informative discussion.
Interconnects (Nathan Lambert) ▷ #memes (9 messages🔥):
- LLM Benchmarks Discussion: A link to a recent tweet discussing the current state of large language model (LLM) benchmarks was shared: current state of llm benchmarks.
- Suspicious Activity Noted: A member mentioned being "sus", possibly implying suspicion or cautiousness within the context.
- It's Live!: Members discussed the timing of an unnamed feature or service going live, clarifying that it happened an hour ago.
- Model Updates on Hugging Face: It was noted that updates, including a 128k context length model, are now available on Hugging Face.
- Search Web for Interesting Results: A member pointed out that enabling the search web feature could result in discovering information about an Australian politician sharing the name Nathan Lambert.
Link mentioned: Tweet from near (@nearcyan): current state of llm benchmarks
Interconnects (Nathan Lambert) ▷ #reads (5 messages):
- Instruction Tuning Gains Traction: A member highlighted an introductory blog post on instruction tuning and recent progress in the field. The post is appreciated for its breadth of references and narrative, though it's noted it could benefit from editing.
- Getting to Grips with CRINGE: The CRINGE loss paper, connected to instruction tuning, was shared; it discusses a training method that uses negative examples to improve model performance. The paper focuses on avoiding issues like unsafe generation and contradictions.
- LLMBar in RewardBench Utilization Noted: A member mentioned that LLMBar is used in RewardBench, in response to a query about its similarity to another LLM-evaluator meta-benchmark.
- Endorsement for LLM-Evaluator Benchmark Tools: A member expressed approval of the LLM-evaluator meta-benchmark, suggesting its utility.
Links mentioned:
- Teach Llamas to Talk: Recent Progress in Instruction Tuning: no description found
- The CRINGE Loss: Learning what language not to model: Standard language model training employs gold human documents or human-human interaction data, and treats all training data as positive examples. Growing evidence shows that even with very large amoun...
Cohere ▷ #general (71 messages🔥🔥):
- Insights on Job Hunting for Engineers: A member shared concerns about the challenges of landing a job through traditional applications, highlighting that personal projects and a strong GitHub presence are more beneficial. They also noted the surprising degree to which big-company names on a resume outweigh the actual work done when it comes to landing interviews and jobs.
- Web-Search for Academia: One user, a student of Homeric Studies, listed multiple academic websites, such as academia.edu and perseus.tufts.edu, that they use with a script for web-search purposes, demonstrating interest in connecting Command-R to rich educational resources.
- Cohere Outreach Request: A user requested help with implementing Cohere Command-R with URL Grounding in BotPress for chat functionality, suggesting that many users might switch to Cohere given its performance and competitive pricing.
- Guidance on Cohere's Chat API Capabilities: Questions arose about how to restrict a chat model to respond only within its training scope. Suggestions included using preambles and BOS/EOS tokens, with the goal of sharpening model outputs to specific topics (a minimal preamble sketch follows this list).
- Meetup on Variational Autoencoders by ML-Maths: An upcoming talk by Dr. Matthew Bernstein on the mathematics behind VAEs and their applications in single-cell genomics was announced, inviting participants to learn about these deep, probabilistic models. The event underscores the community's interest in advanced ML topics.
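Following up on the preamble suggestion above, here is a minimal sketch of constraining Command-R's scope via the Chat API preamble; the model name, preamble text, and question are illustrative, and it assumes the `cohere` Python SDK with a CO_API_KEY environment variable set.

```python
# Minimal sketch of topic restriction via a preamble (illustrative values;
# assumes the `cohere` Python SDK and a CO_API_KEY environment variable).
import cohere

co = cohere.Client()  # picks up CO_API_KEY from the environment

response = co.chat(
    model="command-r",
    preamble=(
        "You are an assistant for Homeric Studies. Only answer questions "
        "about Homeric epic and its scholarship; politely decline anything else."
    ),
    message="Who forged the shield of Achilles?",
)
print(response.text)
```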
Links mentioned:
- Ken's Resume.pdf: no description found
- Using Oracle Autonomous Database Serverless: Oracle Autonomous Database Select AI enables you to query your data using natural language.
Cohere ▷ #project-sharing (8 messages🔥):
- Open Source Announcement: A new matchmaking application using @cohere Command R+, @stanfordnlp DSPy, the @weaviate_io vector store, and @crewAIInc agents has been open-sourced. A video and GitHub links were shared for exploration and feedback.
- Challenges in Web Scraping Automation: A member is developing a generic web scraper that uses gpt-4-turbo to identify (selector, column) pairs but is facing difficulties getting the model to accurately find and interact with input elements for selection and clicking.
- Prompt IDE Tool for Optimal Performance: Prompt Mixer, a desktop application for creating, evaluating, and using AI prompts, was presented with a feature rundown. It offers automatic version control, AI recommendations, and the ability to test prompt chains. Details are available at Prompt Mixer's website.
- Request for Assistance with Cohere and BotPress: A user is seeking help to implement Cohere Command-R with URL Grounding (RAG) in BotPress. They conceptually endorse Cohere and note that many ChatGPT users in BotPress may switch if the integration succeeds.
Links mentioned:
- Prompt Mixer. AI Development Studio for companies: A collaborative workspace for managers, engineers and data experts to develop AI features.
- Tweet from Anmol Desai (@anmol_desai2005): We did it. Finally the code is open sourced. Please give it a try and we are eager for feedback. @weaviate_io @stanfordnlp @cohere @1vnzh @CShorten30 Quoting Muratcan Koylan (@youraimarketer) ...
Cohere ▷ #collab-opps (1 message):
- Seeking Norwegian Cohere Collaborators: A member asked whether any Norwegian companies, preferably consulting firms, have experience with Cohere and could act as a reference or consultant for a project they are working to initiate.
LangChain AI ▷ #general (63 messages🔥🔥):
- Seeking Help with Groq/Mixtral Tool Calls: A member asked for tips on using LangChain with Groq/Mixtral for tool_calls, noting Groq is limited to a single tool with parallel calls disabled; they are considering how to execute single calls in sequence.
- Vision Models Come to the Rescue: In discussions about processing documents "in the wild," members suggested that language models alone are not sufficient and that vision models are necessary for a generalized solution.
- The Picture-Language Union Using LLama: A conversation about the latest methods for communicating images to language models described a special image token in prompts that is replaced by the output of the vision encoder, with the image itself supplied base64-encoded (see the sketch after this list).
- Real-time Chat Topic Management: One user sought advice on managing and categorizing topics in a real-time chat between clients and assistants, looking to associate chat messages with existing topics or create new ones where necessary.
- Startup Interface for Vector Database Chat: For a quick interface where customers can log in and chat with a vector database, LangChain was recommended along with tools like Groq or Llama, plus standard steps: set up LangChain with the needed API keys, create a login system, and connect a chat interface to the vector database.
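As a concrete version of the image-token flow described above, here is a hedged LangChain sketch; the model name, file path, and prompt are illustrative assumptions, and it requires langchain-openai with an OpenAI API key configured.

```python
# Illustrative sketch: sending a base64-encoded image to a multimodal chat model
# via LangChain's content-parts message format (model and path are assumptions).
import base64
from langchain_core.messages import HumanMessage
from langchain_openai import ChatOpenAI

with open("document.png", "rb") as f:
    b64 = base64.b64encode(f.read()).decode()

llm = ChatOpenAI(model="gpt-4-vision-preview", max_tokens=512)
msg = HumanMessage(content=[
    {"type": "text", "text": "Describe the contents of this document."},
    # The image part plays the role of the "special image token": on the model
    # side it is replaced by the vision encoder's output.
    {"type": "image_url", "image_url": {"url": f"data:image/png;base64,{b64}"}},
])
print(llm.invoke([msg]).content)
```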
Links mentioned:
- DLAI - Advanced Retrieval for AI with Chroma: Introduction · Overview of embeddings-based retrieval · Pitfalls of retrieval - when simple vector search fails · Query Expansion · Cross-encoder re-ranking · Embedding adaptors · Other Techniques
- Cross Encoder Reranker | 🦜️🔗 LangChain: This notebook shows how to implement reranker in a retriever with your
- ChatGroq | 🦜️🔗 Langchain: Setup
- Quick Start | 🦜️🔗 Langchain: Large Language Models (LLMs) are a core component of LangChain.
- Quickstart | 🦜️🔗 Langchain: In this guide, we will go over the basic ways to create Chains and Agents that call Tools. Tools can be just about anything - APIs, functions, databases, etc. Tools allow us to extend the capabilities...
- ChatVertexAI | 🦜️🔗 Langchain: LangChain.js supports Google Vertex AI chat models as an integration.
- ChatVertexAI | 🦜️🔗 LangChain: Note: This is separate from the Google PaLM integration. Google has
- Issues · langchain-ai/langchain: 🦜🔗 Build context-aware reasoning applications. Contribute to langchain-ai/langchain development by creating an account on GitHub.
LangChain AI ▷ #share-your-work (9 messages🔥):
- GitHub Project To Structure Web Data: Mishushakov introduced a new GitHub project called LLM Scraper, which can turn any webpage into structured data using large language models (LLMs). The community is encouraged to star the project on GitHub.
- Assistance Requested for Product Hunt Ranking: Anthology_ seeks community support to reach number one on Product Hunt with their AI tool, AllMind AI: Your Personal Stock Analyst, which currently stands at #5 and claims faster and cheaper financial insights than other models.
- Launch of Knowledge Graph SDK at WhyHow.AI: Chiajy announced WhyHow.AI's major upgrade with schema-controlled automated knowledge graphs that structure data from user-uploaded content. Details for the beta program and integration capabilities were shared, along with a link to the introduction post on Medium.
- Community Input Sought on Real-Time Chat Analysis: Dewhysky seeks suggestions for managing topics/subjects/tasks in a real-time client-assistant chat, with the objective of associating messages with existing topics or creating new ones as needed.
- Server Specifications Inquiry for LLMs: Vijay187 inquired about server requirements for running a large language model, which ansh_ai identified as two A100 GPUs with 80GB each for llama 3 70b (roughly 140GB of weights at 16-bit precision).
- Understanding Watermarking in LLMs: Wisewander shared a resource on watermarking large language models, which involves embedding identifiable patterns in text generated by AI models like ChatGPT or Claude, detailed at Watermarking LLMs.
Links mentioned:
- AI Simply Explained: AI Simply Explained
- AllMind AI: Your Personal Stock Analyst - AI financial analyst with real-time market data & insights | Product Hunt: AllMind AI is your personal financial analyst, delivering centralized, real-time, actionable insights directly to you. Our proprietary LLM, AllMind AI, slashes research time by 90% and costs by 98%. W...
- GitHub - mishushakov/llm-scraper: Turn any webpage into structured data using LLMs: Turn any webpage into structured data using LLMs. Contribute to mishushakov/llm-scraper development by creating an account on GitHub.
LangChain AI ▷ #tutorials (1 message):
- Bridging Natural and Structured Query with Langchain: A member detailed the workings of the Self-querying retriever in a blog post, which discusses how Large Language Models (LLMs) and few-shot prompts build structured queries from natural language. The self-querying retriever enhances semantic similarity search by adding metadata-based filtering to the results (a minimal sketch follows).
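For readers who want to try this, the sketch below wires up LangChain's SelfQueryRetriever for the rental-search use case from the post; the documents, metadata fields, and model choice are illustrative assumptions, not the author's code.

```python
# Illustrative SelfQueryRetriever setup (not the blog post's code); assumes
# langchain, langchain-openai, langchain-community, chromadb, and lark.
from langchain.chains.query_constructor.base import AttributeInfo
from langchain.retrievers.self_query.base import SelfQueryRetriever
from langchain_community.vectorstores import Chroma
from langchain_openai import ChatOpenAI, OpenAIEmbeddings

# Toy listings; the metadata schema is an assumption for the rental example.
texts = ["Sunny 2BR near Prospect Park", "Cozy studio close to downtown"]
metadatas = [
    {"rent": 2400, "bedrooms": 2, "neighborhood": "Brooklyn"},
    {"rent": 1800, "bedrooms": 0, "neighborhood": "Manhattan"},
]
vectorstore = Chroma.from_texts(texts, OpenAIEmbeddings(), metadatas=metadatas)

retriever = SelfQueryRetriever.from_llm(
    llm=ChatOpenAI(temperature=0),
    vectorstore=vectorstore,
    document_contents="Rental apartment listings",
    metadata_field_info=[
        AttributeInfo(name="rent", description="Monthly rent in USD", type="integer"),
        AttributeInfo(name="bedrooms", description="Number of bedrooms", type="integer"),
        AttributeInfo(name="neighborhood", description="Neighborhood name", type="string"),
    ],
)

# The LLM turns the natural-language query into a structured filter plus search.
docs = retriever.invoke("2-bedroom apartments in Brooklyn under $2500")
```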
Link mentioned: Building a Rental Apartment Search with Langchain's Self-Querying Retriever: In this blog post, we delve into the capabilities of Langchain's self-querying retriever, a powerful tool for bridging the gap between natural language and structured data retrieval. This retriev…
tinygrad (George Hotz) ▷ #general (26 messages🔥):
- Debating the Future of tinygrad: Members discussed whether tinygrad/box/chip might pivot to becoming a cloud service, referencing outside opinions on AI and cloud services and expressing a range of views on decentralized versus cloud-based AI.
- TinyBox as AI Home Appliance: The vision for TinyBox is to serve as a home appliance running advanced AI models, which local devices can interact with, bypassing the need for cloud servers and tackling censorship issues.
- Portable AI Power vs. Cloud Scalability: The debate continued with comparisons between local high-end AI hardware like TinyBox and the efficiency of cloud services, highlighting issues such as intermittent AI usage by consumers and current AI hardware limitations.
- Local AI Trainingâs Future Importance: A user predicted that models will soon train on user data in real-time and emphasized the increasing relevance of local training hardware as models learn from smaller datasets.
- Weekly Meeting Points for tinygrad Developers: George Hotz outlined key discussion points for the weekly meeting, including the progress of mlperf, potential NVIDIA CI plans, and maintaining the tinygrad codebase under 7500 lines.
Link mentioned: React App: no description found
tinygrad (George Hotz) ▷ #learn-tinygrad (45 messages🔥):
- tinygrad with ROCm Hurdles: A member is trying to set up tinygrad with ROCm but encounters segfaults, looking for guidance following the ROCm 6.1 release.
- Stacking Tensors in tinygrad: In a detailed explanation, a member clarifies how `.stack` combines tensors along a new dimension, while `.realize()` must be explicitly called to materialize computations in memory (see the sketch after this list).
- Master Branch Stability for tinygrad: George Hotz affirms that the `master` branch of tinygrad should be stable and reliable thanks to robust CI processes, addressing a member's concerns about installation and functionality.
- CUDA Compatibility and Windows Limitation: Members discuss the challenges and workarounds for using tinygrad with CUDA on Windows, including WSL and Docker methods, while another member confirms that Windows is not officially supported.
- In-Depth Guidance on tinygrad Mechanics: Several members exchange resources on deeper aspects of tinygrad, such as memory management, shape tracking, and handling in-place operations, leading to discussions about implementation details and documentation contributions.
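A minimal sketch of that stack/realize distinction, assuming a recent tinygrad; note that Tensor.stack's exact signature has shifted across versions, so treat this as illustrative rather than canonical.

```python
# Sketch of stack vs. realize in tinygrad (Tensor.stack's signature varies by
# version; list-taking form shown here).
from tinygrad.tensor import Tensor

a = Tensor([1.0, 2.0, 3.0])
b = Tensor([4.0, 5.0, 6.0])

s = Tensor.stack([a, b])  # combine along a new leading dimension -> shape (2, 3)
s = s.realize()           # explicitly materialize the computation in memory
print(s.numpy())          # [[1. 2. 3.] [4. 5. 6.]]
```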
Links mentioned:
- How ShapeTracker works: Tutorials on tinygrad
- tinygrad-notes/uops-doc.md at main · mesozoic-egg/tinygrad-notes: Tutorials on tinygrad. Contribute to mesozoic-egg/tinygrad-notes development by creating an account on GitHub.
- tinygrad-notes/cuda-tensor-core-pt1.md at main · mesozoic-egg/tinygrad-notes: Tutorials on tinygrad. Contribute to mesozoic-egg/tinygrad-notes development by creating an account on GitHub.
- tinygrad/tinygrad/tensor.py at 37f8be6450b6209cdc9466a385075971e673c653 · tinygrad/tinygrad: You like pytorch? You like micrograd? You love tinygrad! ❤️ - tinygrad/tinygrad
- Meta AI: Use Meta AI assistant to get things done, create AI-generated images for free, and get answers to any of your questions. Meta AI is built on Meta's latest Llama large language model and uses Emu,...
DiscoResearch ▷ #mixtral_implementation (5 messages):
- Llama3 vs. Mixtral Face-Off: A German RAG evaluation of Llama3 70b instruct was mentioned, but it appears not to perform as well as Mixtral-8x7B-Instruct-v0.1 on this dataset.
- Metric Discrepancies Questioned: A member raised concerns about why the "question to context" metric showed large discrepancies compared to other metrics in the evaluation results. They suggested that "loglikelihood_acc_norm_nospace" might address the formatting issues causing these differences.
- Potential Formatting Bug Spotted: The possibility of a formatting bug in the query template was highlighted, specifically the absence of the "Answer:" part, which might skew the evaluation results (see the sketch after this list). They referred to a relevant GitHub source for clarification.
- Request for Command-R-Plus Comparison: A comparison between Llama3 70b instruct and command-r-plus was requested to assess their respective performances.
- DiscoLM German 7b Evaluation Details Shared: A member shared detailed evaluation results of DiscoLM German 7b, noting significant improvement in 3 of 4 categories over previously shared results and providing a performance comparison here.
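As background for the formatting-bug discussion: a loglikelihood-style multiple-choice eval scores each answer option as a continuation of the prompt, so a missing "Answer:" cue can systematically shift the scores. The sketch below is illustrative only; the template strings are assumptions, not lighteval's exact prompts.

```python
# Illustrative only: why a missing "Answer:" cue can skew loglikelihood-based
# multiple-choice evals. Template strings are assumptions, not lighteval's.
def build_prompt(context: str, question: str, answer_cue: bool) -> str:
    prompt = f"Context: {context}\nQuestion: {question}\n"
    if answer_cue:
        prompt += "Answer:"  # cues the model that an option label follows
    return prompt

# The eval ranks options by log P(option | prompt). Without the "Answer:" cue,
# continuations like " A" are less natural for the model, and accuracy can drop
# for reasons unrelated to the model's actual knowledge.
```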
Links mentioned:
- lighteval/src/lighteval/tasks/tasks_prompt_formatting.py at 11b48333b46ecd464cc3979de66038c87717e8d6 · huggingface/lighteval: LightEval is a lightweight LLM evaluation suite that Hugging Face has been using internally with the recently released LLM data processing library datatrove and LLM training library nanotron. - hug...
- deutsche-telekom/Ger-RAG-eval · Datasets at Hugging Face: no description found
DiscoResearch ▷ #general (6 messages):
- Innovative Chatbot Execution Strategies: Armifer91 is experimenting with categorizing chatbot functions into groups and implementing a function called "execute_model" to dispatch execution across function groups, a strategy inspired by the MoE (Mixture of Experts) model but adapted for business applications. They are concerned about commercial viability due to the large prompt size and are exploring embedding functions to provide functionality dynamically without excessive prompt length.
- Haystack Framework Enhances Chatbots: Vladimir0583 pointed out that the Haystack LLM framework can dynamically invoke services based on the user's intent by indexing them as OpenAPI specs. A GitHub notebook detailing this approach was shared: Haystack RAG Services Demo Notebook.
- Seeking new tokens for Llama fine-tuning: Sinan2 asked about adding new special tokens to Llama for fine-tuning, wondering whether it's as simple as editing the tokenizer's JSON files and training, or whether the process is more involved.
- Frustration with Platform Downtime: jp1 expressed frustration that the Hugging Face platform was down, and Maxidl commented that the outage spoiled the evening's activities.
Link mentioned: notebooks/haystack2x-demos/haystack_rag_services_demo.ipynb at main · vblagoje/notebooks: Contribute to vblagoje/notebooks development by creating an account on GitHub.
DiscoResearch ▷ #discolm_german (45 messages🔥):
- DiscoLM German Fine-Tuning Challenges: Members discussed the limitations of fine-tuning DiscoLM on German benchmarks, noting that without substantial examples and relevant data, benchmark scores can decrease. A tokenization issue with DiscoLM was mentioned along with proposed workarounds, such as building on other models like Instruct.
- Experimenting with Whisper Models: For German automatic speech recognition, suggestions were made to trial models such as whisper-tiny-german, whisper-base-quant-ct2, and AISAK-Listen, with additional advice on further fine-tuning or quantization for better quality and smartphone compatibility.
- Conversation Templates and Tokenizer Confusions: Discussions covered the template and tokenizer complexities of Llama-3 models. It was highlighted that while the ChatML template is standard, challenges arise from the tokenizer configuration, including zero-weight special-token embeddings and alternative eos_tokens for conversation turns.
- Troubleshooting Model Generation Errors: Help was provided to a member struggling to get DiscoLM German to generate proper responses. Suggestions included calling the `generate` function without the attention mask and using text-generation pipelines for easier application.
- Llama3 Performance and Output Quality: Members debated how to improve Llama3's performance in German, pondering whether the bottleneck is computation or time. Suggestions included repeating the LeoLM style of training, reaching out to the occiglot team for assistance, and assessing the multilingual capabilities of the Llama3 70b model.
Links mentioned:
- cstr/llama3-discolm-orca · Hugging Face: no description found
- jvh/whisper-base-quant-ct2 · Hugging Face: no description found
- primeline/whisper-tiny-german · Hugging Face: no description found
- aisak-ai/aisak-listen · Hugging Face: no description found
Latent Space ▷ #ai-general-chat (53 messages🔥):
- Stretching the Context Window with RoPE: Members discussed the absence of providers using RoPE scaling to extend the context window of large language models, with some expressing interest in the approach. Context was provided through a Perplexity AI link.
- High Quality Web Data Release, FineWeb: The release of FineWeb, containing 15 trillion tokens of web data, was discussed, with a link posted to Twitter. Models trained on FineWeb reportedly outperform those trained on earlier datasets like RefinedWeb and C4.
- Hydra Framework Spurs Varied Reactions: The AI community shared experiences with the Hydra framework from Facebook Research, designed for elegantly configuring complex applications. Some found it excellent for managing ML experiments (GitHub link to Hydra), while others questioned its uniqueness.
- Phi-3 Gains Weight: There was buzz about Microsoft's Phi-3 release, a successor to Phi-2 with three versions, all larger in size. Conversation included a Tweet about Phi-3 and speculation on its performance compared to other models like Llama 3 8B.
- Perplexity.ai Fundraising Success: Members remarked on the recent funding announcement for Perplexity.ai, which some users now prefer over traditional search engines. The fundraising tweet can be found here.
Links mentioned:
- Tweet from Guilherme Penedo (@gui_penedo): We have just released 🍷 FineWeb: 15 trillion tokens of high quality web data. We filtered and deduplicated all CommonCrawl between 2013 and 2024. Models trained on FineWeb outperform RefinedWeb, C4, ...
- Phi-3 Technical Report: A Highly Capable Language Model Locally on Your Phone: We introduce phi-3-mini, a 3.8 billion parameter language model trained on 3.3 trillion tokens, whose overall performance, as measured by both academic benchmarks and internal testing, rivals that of ...
- AgentKit: Flow Engineering with Graphs, not Coding: We propose an intuitive LLM prompting framework (AgentKit) for multifunctional agents. AgentKit offers a unified framework for explicitly constructing a complex "thought process" from simple n...
- Tweet from yi (@agihippo): phi is a good litmus test to tell who understands LLMs and who doesn't.
- Tweet from Aravind Srinivas (@AravSrinivas): Excited to announce we've raised 62.7M$ at 1.04B$ valuation, led by Daniel Gross, along with Stan Druckenmiller, NVIDIA, Jeff Bezos, Tobi Lutke, Garry Tan, Andrej Karpathy, Dylan Field, Elad Gil, ...
- Tweet from Aran Komatsuzaki (@arankomatsuzaki): Microsoft just released Phi-3 - phi-3-mini: 3.8B model trained on 3.3T tokens rivals Mixtral 8x7B and GPT-3.5 - phi-3-medium: 14B model trained on 4.8T tokens w/ 78% on MMLU and 8.9 on MT-bench http...
- GitHub - facebookresearch/hydra: Hydra is a framework for elegantly configuring complex applications: Hydra is a framework for elegantly configuring complex applications - facebookresearch/hydra
- GitHub - facebookresearch/mbrl-lib: Library for Model Based RL: Library for Model Based RL . Contribute to facebookresearch/mbrl-lib development by creating an account on GitHub.
- mbrl-lib/mbrl/examples/conf/dynamics_model/gaussian_mlp_ensemble.yaml at main · facebookresearch/mbrl-lib: Library for Model Based RL . Contribute to facebookresearch/mbrl-lib development by creating an account on GitHub.
- mbrl-lib/mbrl/examples/conf/main.yaml at main · facebookresearch/mbrl-lib: Library for Model Based RL . Contribute to facebookresearch/mbrl-lib development by creating an account on GitHub.
- YAML Ain't Markup Language (YAML™) revision 1.2.2: no description found
Latent Space ▷ #ai-announcements (1 message):
- LLM Paper Club Dives into Time Series with TimeGPT: Tomorrow's US paper club is discussing TimeGPT, a paper on time series forecasting, featuring the authors and <@556359685306056721>. Remember to sign up for notifications; the event will take place on Zoom, not Discord.
- Stay Up-to-date with Latent Space Events: Latent.Space encourages users to click the RSS logo above the calendar on the right to add events to their calendar; "Add iCal Subscription" appears on hover for easy event tracking.
Link mentioned: LLM Paper Club (TimeGPT paper WITH AUTHORS) · Zoom · Luma: This week @Vibhu has invited Nixtla to cover TimeGPT: https://arxiv.org/abs/2310.03589 Also submit and vote for our next paper: …
Latent Space ▷ #ai-in-action-club (1 message):
alan_95125: Selfcheck, both the Evaluator & Evaluatee models are the same by definition.
Mozilla AI ▷ #llamafile (24 messages🔥):
- Llama 3 70b Recommended Over 8b: One user indicated a preference for Llama 3 70b, having been unable to get the 8b version working in llamafile. The Q2 weights for 70b were noted to be only 26GB.
- Quantization Quirks: A user reported issues with the Q2 variant of the Llama model on an M1 Pro system, resulting in garbled output. Another user noted the model works in pure CPU mode, albeit more slowly.
- Android Ambitions Thwarted by Address Space: Interest in running llamafile on Android was discussed, but it was explained that Android support isn't possible without a 47-bit address space.
- Redis Inventor Endorses Llamafile: The creator of Redis shared a positive sentiment about the Llama 3 70b llamafile on Twitter, an endorsement the llamafile team celebrated.
- Multimodal Port Management: A user asked how to control which port a model runs on, with the goal of running multiple llamafile instances simultaneously; another user suggested using the `--port` flag (a short sketch follows this list).
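A small sketch of the `--port` approach, assuming two llamafile binaries on disk; the filenames are illustrative.

```python
# Launch two llamafile servers side by side on distinct ports (illustrative
# filenames; each llamafile is a self-contained executable).
import subprocess

servers = [
    subprocess.Popen(["./llava-v1.5-7b.llamafile", "--port", "8081"]),
    subprocess.Popen(["./Meta-Llama-3-70B.llamafile", "--port", "8082"]),
]
for p in servers:
    p.wait()  # keep both servers running in the foreground
```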
Links mentioned:
- Discord - A New Way to Chat with Friends & Communities: Discord is the easiest way to communicate over voice, video, and text. Chat, hang out, and stay close with your friends and communities.
- llama.cpp/.devops/main-vulkan.Dockerfile at master · ggerganov/llama.cpp: LLM inference in C/C++. Contribute to ggerganov/llama.cpp development by creating an account on GitHub.
Skunkworks AI ▷ #general (3 messages):
- 4chan's Insight on Context Size: A member mentioned an assertion from 4chan that a certain AI model has had 32k context the entire time, expressing surprise at the revelation.
- Alpin's Take on Scaling: A member summarized Alpin's approach to scaling, describing the use of dynamic NTK and linear scaling without rope, while maintaining that it should still be effective.
- Matt's Config for Long Context AI: The member shared a link to Matt's 16k configuration for the Llama model on Hugging Face, providing a JSON snippet with parameters like "max_position_embeddings": 16000 and "model_type": "llama". Access the file here; a hedged sketch of such a config follows below.
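For orientation, a context-extension config patch of this kind looks roughly like the dict below; the two quoted fields come from the discussion above, while the rope_scaling entry is an illustrative assumption, not a value from Matt's file.

```python
# Illustrative shape of a context-extended Llama config (not Matt's actual file);
# "max_position_embeddings" and "model_type" are quoted above, the rest is assumed.
config_patch = {
    "max_position_embeddings": 16000,  # extended context length
    "model_type": "llama",
    # Hugging Face's rope_scaling field; type and factor here are assumptions.
    "rope_scaling": {"type": "dynamic", "factor": 2.0},
}
```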
Link mentioned: config.json · mattshumer/Llama-3-8B-16K at main: no description found
Skunkworks AI ▷ #datasets (1 message):
noob_master169: OCR dataset for less popular languages? mainly looking for doc type data
Skunkworks AI ▷ #finetuning (10 messages🔥):
- Seeking Simplification of Medical Knowledge: A physician-scientist inquired about fine-tuning an LLM to explain complex genetic and medical information at a 6th-grade reading level. They expressed interest in adapting the explanation process for patients with lower educational backgrounds.
- Agentic System Over Fine-Tuning: It was suggested that rather than immediately fine-tuning a model, one could build an agentic system that manages the task through specialized stages, likening it to a corporate workflow.
- From Medical Jargon to Layman's Terms: The advice detailed a multi-stage approach: comprehend medical lab results using existing models enhanced by medical ontologies, summarize them at a professional level, then translate the summary to a 6th-grade level (a minimal sketch of such a pipeline follows this list).
- Data-Driven Fine-Tuning Direction: The final recommendation was to use the strongest available model to collect inputs and outputs, which, after sufficient time in production, could yield enough data for targeted fine-tuning on the specific task of simplifying medical information directly.
- Surprised by Agent Efficiency: The inquirer was surprised by the suggestion of using an agent for the task, having previously assumed that fine-tuning would be necessary to achieve the desired simplification of medical content.
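A minimal sketch of the staged pipeline suggested above, assuming the `openai` v1 Python SDK with an API key set; the model name, prompts, and lab values are illustrative, not a vetted clinical workflow.

```python
# Illustrative two-stage pipeline (prompts and lab values are made up; not
# medical advice). Assumes the `openai` v1 SDK and OPENAI_API_KEY in the env.
from openai import OpenAI

client = OpenAI()

def run_stage(system_prompt: str, text: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4-turbo",  # "strongest available model", per the advice above
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": text},
        ],
    )
    return resp.choices[0].message.content

labs = "HGB 9.1 g/dL (low), MCV 72 fL (low), Ferritin 8 ng/mL (low)"

professional = run_stage(
    "Summarize these lab results at a professional level for a physician.", labs)
lay_summary = run_stage(
    "Rewrite this summary at a US 6th-grade reading level, avoiding jargon.",
    professional)
print(lay_summary)
```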
Skunkworks AI ▷ #moe-main (1 message):
getovahit: Enjoyed this! Thanks for sharing your work
LLM Perf Enthusiasts AI ▷ #general (3 messages):
- Excitement Over Meta AI's "Imagine": A member expressed enthusiasm about Meta AI's "Imagine", calling it insane.
- Call for Imagine Examples: Following the excitement about Meta AI's Imagine, another member asked for examples to illustrate the capabilities or outcomes.
- In Search of Dev Tools for LLMs: A member sought recommendations on development tools that are popular or preferred for working with Large Language Models (LLMs).
LLM Perf Enthusiasts AI ▷ #speed (5 messages):
- Struggle with Azure OpenAI Latency: A member described significant latency issues with Azure's OpenAI service, with some requests taking up to 20 minutes.
- Rate Limit Woes: Another member expressed frustration at being constantly rate-limited on Azure, with just two requests within 15 seconds triggering the backoff strategy.
- Possible Azure Latency Culprit: A member pointed out that Azure's latency issues may have been specific to that day due to reported service problems.
- Tracking API Response Times: The shared link from GPT for Work provides real-time tracking of API response times for major large language models, including OpenAI and Azure OpenAI, with suggestions for achieving faster response times.
Link mentioned: OpenAI API and other LLM APIs response time tracker: no description found
Datasette - LLM (@SimonW) ▷ #ai (2 messages):
- Blueprint AI in Architecture: A member shared that a major architecture firm is using AI as a "preflight" tool to identify potential issues and code violations in architectural plans. However, the firm has not yet adopted AI for generating content during the blueprint phase.
- Seeking AI for Blueprint Interpretation: The discussion also touched on exploring AI models or approaches for interpreting blueprints, particularly focused on tracing ductwork in PDF plans. No specific models or solutions were provided in the conversation.
Datasette - LLM (@SimonW) ▷ #llm (2 messages):
- Llama 3 Makes a Grand Entry: Llama 3 was released to impressive results, ranking joint 5th on the LMSYS arena leaderboard, right behind major players like Claude 3 Opus and some GPT-4 variants. This openly licensed model can even run on high-end laptops.
- SimonW Unveils Tools for Llama 3: Simon Willison introduces LLM, a command-line tool and Python library that provides access to Llama 3 and many other models. His blog post details several ways to access Llama 3, both through hosted versions and on local hardware, highlighted here (a minimal sketch of the Python API follows this list).
- Request for Hackernews Summary Generator: A member asked for the latest version of a Hacker News summary generator, which they recall seeing as a bash script.
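For orientation, LLM's Python API looks roughly like this; the model alias below is an assumption, since actual aliases depend on which plugin or hosted provider is installed, per the linked post.

```python
# Minimal sketch of the LLM library's Python API; the model alias is an
# assumption and depends on an installed plugin or provider configuration.
import llm

model = llm.get_model("llama-3-70b-instruct")  # alias registered by a plugin
response = model.prompt("Summarize the plot of the Odyssey in two sentences.")
print(response.text())
```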
Link mentioned: Options for accessing Llama 3 from the terminal using LLM: Llama 3 was released on Thursday. Early indications are that it's now the best available openly licensed model - Llama 3 70b Instruct has taken joint 5th place on the LMSYS arena ...
AI21 Labs (Jamba) ▷ #general-chat (4 messages):
- Spam Alert in General Chat: The channel received multiple spam messages promoting inappropriate content with a Discord invite link.
- Curiosity about Jamba's Requirements: A member inquired about Jamba's compatibility with LM Studio and its operational requirements, given that it boasts memory capacity akin to Claude.
- Jamba Running Challenges Discussed: There's a discussion of the difficulty of running Jamba due to high RAM requirements, with a mention that Google Colab didn't provide sufficient resources and that attempts on Google Cloud were unsuccessful.