Andrew Ng's The Batch writeup on Agents made a splash across all platforms this weekend:
Devin's splashy demo recently received a lot of social media buzz. My team has been closely following the evolution of AI that writes code. We analyzed results from a number of research teams, focusing on an algorithm's ability to do well on the widely used HumanEval coding benchmark. You can see our findings in the diagram below.
GPT-3.5 (zero shot) was 48.1% correct. GPT-4 (zero shot) does better at 67.0%. However, the improvement from GPT-3.5 to GPT-4 is dwarfed by incorporating an iterative agent workflow. Indeed, wrapped in an agent loop, GPT-3.5 achieves up to 95.1%.
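That agent wrapper is, at its core, a generate-test-revise loop. A minimal sketch of the pattern (the `call_model` stub stands in for a real LLM API call and is scripted to fail once then succeed; the loop structure, not the stub, is the point):

```python
# Generate-test-revise agent loop: ask the model for code, run the unit
# tests, and feed any failure back into the next prompt until tests pass.

def call_model(prompt, attempt):
    """Stub for an LLM call. A real agent would send `prompt` to an API;
    this stub returns a buggy solution first, then a fixed one."""
    if attempt == 0:
        return "def add(a, b):\n    return a - b"  # deliberate bug
    return "def add(a, b):\n    return a + b"

def run_tests(code, tests):
    """Exec the candidate plus its tests; return an error string or None."""
    namespace = {}
    try:
        exec(code, namespace)
        for t in tests:
            exec(t, namespace)
    except AssertionError:
        return "test failed: " + t
    except Exception as e:
        return "error: " + str(e)
    return None

def agent_loop(task, tests, max_iters=4):
    prompt = task
    for attempt in range(max_iters):
        code = call_model(prompt, attempt)
        failure = run_tests(code, tests)
        if failure is None:
            return code, attempt + 1  # solved on this attempt
        # Revise: show the model its previous attempt and the failure.
        prompt = task + "\nPrevious attempt:\n" + code + "\nFeedback: " + failure
    return code, max_iters

code, iters = agent_loop("Write add(a, b).", ["assert add(2, 3) == 5"])
print("solved in", iters, "attempts")
```

Real agent workflows layer planning, tool use, and reflection on top, but the HumanEval-style gains reported above come largely from exactly this feedback cycle.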
Nothing here is new to people who have studied the agents field, but Andrew's credibility and agent framework (very close to Lilian Weng's taxonomy, plus the recent metagame of multi-agent collaboration) sell it.
We published The Unbundling of ChatGPT today. Also, Emad stepped down from Stability, and there are more Sora videos out; make sure to check out the Don Allen Stevenson III one.
Table of Contents
[TOC]
We've added more subreddits and are synthesizing topics across them. Comment crawling is still not implemented but coming along.
Stable Diffusion Models and Techniques
- New Stable Diffusion models and techniques are being developed, such as Cyberrealistic_v40, Platypus XL, and SDXL Lightning for generating Naruto-style images. (Playing with Cyberrealistic_v40, Still Liking Platypus XL, Naruto (Outputs from SDXL Lightning))
- /r/StableDiffusion: LoRA and upscaling methods are being explored to improve image quality, such as cartoonizing images while preserving content and a general purpose negative prompt LoRA to eliminate common problems. (best LORA or method with sd1.5 to cartoon-ize an image while keeping content? (flatten shading, reinforce outlines), General purpose negative prompt?)
- /r/StableDiffusion: New workflows and extensions are being developed for Stable Diffusion, such as BeautifAI for upscaling in ComfyUI, FrankenWeights for mixing model weights, and integrating Prompt Quill expansion in Fooocus. (BeautifAI - Image Upscaler & Enhancer - ComfyUI, It's alive! FrankenWeights is coming… [WIP], Prompt Quill in Fooocus)
Local LLM Deployment and Optimization
- /r/LocalLLaMA: Deploying large language models locally is a popular topic, with discussions around hardware requirements, inference speed, and model selection for different use cases. (Would it make sense to stick a P40 24GB in with a 3090 to have 48GB VRAM?, Best output quality for 4090 & 64GB RAM?, What is your computer specs?)
- /r/LocalLLaMA: Optimizing LLM performance is an active area of research, with discussions around architectures for reasoning, finetuning strategies, and serving multiple users efficiently. (What architecture will give us a reasoning LLM ?, All work and no play makes your LLM a dull boy; why we should mix in pretraining data for finetunes., Is it possible to serve mutliple user at once using llama-cpp-python ?)
- /r/LocalLLaMA: Guides and resources are being developed to help users get started with local LLMs, from beginner to advanced levels. (New user beginning guide: from total noob to well-informed user, part 1/3, another try…)
Machine Learning Research and Techniques
- /r/MachineLearning: New machine learning architectures and techniques are being proposed and discussed, such as Treeformers using hard attention and decision trees for causal language modeling. ([P] Treeformer: hard attention + decision trees = causal language modelling)
- /r/MachineLearning: Optimization techniques for deploying ML models are being explored, such as using TensorRT for fast PyTorch model inference. ([D] Looking for fastest inference way to run a pytorch model on TensorRT)
- /r/MachineLearning: Debugging and improving ML models is an ongoing challenge, with discussions around understanding and fixing issues like spiking test loss. ([D] Does anyone know why my test loss is spiking so crazily?)
AI Assistants and Applications
- /r/OpenAI: AI assistants are being used in new ways, such as mediating arguments to provide a neutral perspective and helping with coding tasks. (Mediating Arguments with ChatGPT, Coding LLM that runs on 3090)
- /r/StableDiffusion: New AI applications are being developed, such as AI influencers, interactive "AI Brush" tools, and immersive experiences to explore AI-generated worlds based on images. (Here is my first 45 days of wanting to make an AI Influencer and Fanvue/OF model with no prior Stable Diffusion experience, Quick Breakdown of my interactive "AI Brush" build with StreamDiffusion, Excited for the future)
Memes and Humor
- AI-generated memes and humorous content continue to be popular, poking fun at the current state of AI. (Do not generate a tree using a model trained on p*rn, "Don't ever buy no weed from the gas station bro", It do be like that)
PART X: AI Twitter Recap
all recaps done by Claude 3 Opus, best of 4 runs
Model Releases & Updates
- Mistral AI released a new 7B 0.2 base model with 32k context window, announced at a hackathon (13k views)
- Quantized 4-bit Mistral 7B models released, enabling 2x faster inference with 70% less VRAM using QLoRA finetuning (14k views)
- Mistral 7B 0.2 expected to outperform Yi-9B (486 views)
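For context on the quantization claim above: storing weights in 4 bits instead of 16 is where the ~70% VRAM saving comes from. This toy absmax round-trip (not QLoRA's actual NF4 kernels, which use a non-uniform code with per-block scales) shows the mechanics and the rounding error involved:

```python
# Toy absmax 4-bit quantization: map floats to signed 4-bit ints in [-7, 7]
# and back. The memory arithmetic is the headline: 4 bits per weight
# instead of 16, at the cost of bounded rounding error.

def quantize_4bit(weights):
    scale = max(abs(w) for w in weights) / 7.0  # absmax scaling
    q = [round(w / scale) for w in weights]     # ints in [-7, 7]
    return q, scale

def dequantize_4bit(q, scale):
    return [v * scale for v in q]

w = [0.12, -0.7, 0.33, 0.06]
q, s = quantize_4bit(w)
w_hat = dequantize_4bit(q, s)
max_err = max(abs(a - b) for a, b in zip(w, w_hat))
print(q, max_err)
```

The error per weight is bounded by half a quantization step (`s / 2`); QLoRA then recovers quality by fine-tuning small LoRA adapters in higher precision on top of the frozen 4-bit weights.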
Open Source Efforts & Challenges
- Stability AI's Emad Mostaque out following investor mutiny and staff exodus (11k views)
- Open source AI would be better if AAA games hadn't been "total shit" for the last decade, making high-end GPUs not worth it for gaming (1.7k views)
- Open-source AI isn't real without distributed pretraining; it can't depend on VCs spending millions and then giving it away (468 views)
Emerging Applications & Demos
- Financial agent application built with LangChain, can get stock prices, financials, market news (46k views)
- Telegram proxy setup guide to circumvent potential ban in Spain using built-in proxy feature (50k views)
- Dancing robot powered by Mistral 7B demoed at hackathon (9.8k views)
- Claude-to-Claude conversations induce concerning outputs like âpsychotic breaksâ (53k views)
PART 0: Summary of Summaries of Summaries
- Mistral's New 7B v0.2 Base Model Drops: Mistral AI casually released their new Mistral 7B v0.2 Base model at the @cerebral_valley hackathon, featuring a 32k context window and other improvements detailed in their release notes. The AI community is abuzz with the implications and benchmarking results of this significant update.
- Stability AI CEO Emad Mostaque Resigns: In a major shakeup, Emad Mostaque resigned as CEO of Stability AI to pursue decentralized AI. Interim co-CEOs Shan Shan Wong and Christian Laforte will lead the search for a permanent replacement. Speculation is rife about the company's future direction and commitment to open-source initiatives amidst this leadership transition.
- Anthropic's Claude Shines Despite Limitations: Users praise Anthropic's Claude for its performance and context handling, especially the self-moderated version, but express frustration with the strict 1M token per day rate limit on the 200k context window. The $500 scale plan is suggested as a more accessible option for extensive usage, while the potential of Claude's API for open-source development generates excitement.
- Optimizers and Architectures Advance LLMs: Novel optimizers like GaLore and architectures like DenseFormer are pushing the boundaries of language model training efficiency and performance. Discussions revolve around GaLore's significant VRAM savings and potential over-training risks, while DenseFormer's depth-weighted averaging shows promising perplexity improvements. The community eagerly awaits further developments in these areas.
- AI Assistants and Agents Evolve: Projects like Open Interpreter's 01 Light, a fully open-source personal AI agent, and World Simulator from Nous Research are capturing the community's imagination with their engaging experiences and potential for customization. Meanwhile, frameworks like LangChain are enabling more sophisticated decision-making and task automation for AI agents, as evidenced by various shared guides and tutorials.
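DenseFormer's depth-weighted averaging, mentioned above, replaces each block's output with a weighted average over the embedding and every block output so far. A scalar toy version, assuming blocks are plain functions and weights are fixed rather than learned:

```python
# Toy depth-weighted averaging in the style of DenseFormer: after block i,
# the residual stream becomes a weighted average of the embedding and all
# block outputs so far. The real model learns one weight vector per depth.

def dense_forward(x, blocks, depth_weights):
    outputs = [x]                      # index 0 is the embedding itself
    for i, block in enumerate(blocks):
        y = block(outputs[-1])
        outputs.append(y)
        w = depth_weights[i]           # one weight per output so far
        mixed = sum(wi * oi for wi, oi in zip(w, outputs)) / sum(w)
        outputs[-1] = mixed            # the mix feeds the next block
    return outputs[-1]

blocks = [lambda v: v + 1.0, lambda v: v * 2.0]
# After block 1: average over [x, y1]; after block 2: over [x, y1', y2].
weights = [[1.0, 1.0], [1.0, 1.0, 1.0]]
result = dense_forward(1.0, blocks, weights)
print(result)
```

The extra cost is a handful of scalars per layer, which is why the reported perplexity gains are interesting relative to the parameter budget.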
PART 1: High level Discord summaries
Stability.ai (Stable Diffusion) Discord
- SD Ecosystem Buzzing: The community is actively discussing Stable Diffusion models, particularly in anticipation of the upcoming SD3 release, with buzz around potential improvements and comparative analysis against models such as SDXL. Issues surrounding compatibility with AMD GPUs were also raised, with members sharing solutions and workarounds.
- AI Art at a Click, But Not Without Hiccups: Frustrations were voiced over online AI image generation services like Civitai and Suno, citing content restrictions and the types of content generated. Community members shared resources such as Stable Cascade Examples to showcase different model capabilities.
- Regulatory Rumbles: A polarized debate unfolded on the implications of regulation in AI technology. Ethical considerations were weighed against fears of stifling innovation, reflecting a community conscious of the balance between open-source development and proprietary constraints.
- Tech Support Tribe: A knowledge-sharing atmosphere prevails as newbies and veterans alike navigate technical tribulations related to model installations. Resources for learning and troubleshooting were shared, including direct links to support channels and expert advice within the community.
- Connecting AI Threads: Various links were circulated for further information and utilities, such as a Stable Diffusion Glossary and StabilityMatrix, a comprehensive multi-platform package manager for Stable Diffusion. These tools are meant to aid understanding and enhance usage of Stable Diffusion products among AI engineers.
Unsloth AI (Daniel Han) Discord
- Innovating Within Memory Limits: A recent pull request on the PyTorch torchtune repository allows full model fine-tuning while keeping the memory footprint under 16GB, enabling users with consumer-grade GPUs to train more efficiently.
- Enhancing Fine-Tuning Capabilities: The latest release (v0.10.0) of Hugging Face PEFT includes LoftQ, which improves fine-tuning for large models.
- ORPO Implementation with Mistral: A user reported effective application of ORPO TRL on Mistral-7B-v0.2 using the argilla/ultrafeedback-binarized-preferences-cleaned dataset, suggesting that the method has potential for further optimization.
- Debating Best Practices for Training AI Models: Discussions across channels engage with the challenges of multi-turn dialogue training in ORPO, the importance of standardized formats, and pressing issues with using Ollama templates and different quant models.
- Model Performance Milestones: The community celebrates the performance of new models like sappha-2b-v3 and MasherAI-v6-7B which reportedly surpassed benchmarks after fine-tuning with Unsloth on Gemma-2b.
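Most of the fine-tuning work above (LoftQ, Unsloth, sub-16GB training) builds on low-rank adapters. A toy LoRA update in plain Python with 2x2 matrices, to show why the trainable state is so small: only the factors B and A are updated, while the base weight W stays frozen.

```python
# Toy LoRA: effective weight W' = W + (alpha / r) * B @ A, where only the
# low-rank factors B (d x r) and A (r x d) are trained. For d=4096, r=16,
# that is 2*4096*16 trainable values instead of 4096*4096.

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def lora_weight(W, A, B, alpha, r):
    delta = matmul(B, A)               # rank-r update
    s = alpha / r                      # standard LoRA scaling
    return [[W[i][j] + s * delta[i][j] for j in range(len(W[0]))]
            for i in range(len(W))]

W = [[1.0, 0.0], [0.0, 1.0]]  # frozen base weight (2x2 identity)
B = [[1.0], [0.0]]            # d x r, with r = 1
A = [[0.0, 2.0]]              # r x d
W_eff = lora_weight(W, A, B, alpha=2.0, r=1)
print(W_eff)
```

LoftQ's contribution sits one level below this sketch: it chooses the initial A and B so that W + BA best approximates the original weights after quantization.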
Perplexity AI Discord
- Perplexity Pro Users Clash Over Image Generation: Pro users discussed Perplexity's image generation feature and noted that turning off the Pro toggle on the web version allows for image generation using Writing mode.
- Model Showdown: Claude Opus vs. GPT-4 Turbo: Engineering chatter touched upon comparing Claude 3 Opus and GPT-4 Turbo, highlighting that GPT-4 Turbo can compile Python files, unlike Perplexity.
- Stability AI Strikes Local Chord: There was buzz about Stability AI's local models, like SDXL, and the tradeoff between performance and the hefty costs of running these tools on personal hardware.
- Perplexity Puzzles and Potential: Users were bewildered by some aspects of Perplexity, including intrusive search triggers and unrelated prompts, while also envisioning features like a Claude 3 Opus API and integration with iOS Spotlight search.
- Coding Within Token Boundaries: A crucial tip for engineers working with the Perplexity API was to heed the 16,384 token limit, with suggestions to use tools like OpenAI's tokenizer to gauge token counts accurately and stay within the limit.
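A pre-flight check against that 16,384-token limit can be rough but cheap. This sketch uses the common ~4-characters-per-token heuristic as a stand-in; the exact count depends on the tokenizer, so use a real tokenizer (e.g. OpenAI's) for precise budgeting:

```python
# Rough pre-flight check against an API token limit, using the common
# ~4 chars/token heuristic. This miscounts for code and non-English text;
# swap in a real tokenizer for exact numbers.

TOKEN_LIMIT = 16_384
CHARS_PER_TOKEN = 4  # heuristic, not exact

def estimated_tokens(text):
    return len(text) // CHARS_PER_TOKEN + 1

def fits_budget(prompt, reserved_for_completion=1024):
    # Leave headroom for the model's reply, which shares the same budget.
    return estimated_tokens(prompt) + reserved_for_completion <= TOKEN_LIMIT

short_prompt = "Summarize this paragraph."
long_prompt = "x" * 100_000
print(fits_budget(short_prompt), fits_budget(long_prompt))
```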
LM Studio Discord
- Taming the VRAM Beast for AI: Engineers debated the best GPUs for running AI models, with the RTX 3090's 24GB VRAM being a popular choice. Compatibility issues between differing GPU makes, such as AMD and NVIDIA, were noted when configuring multi-GPU setups.
- Local LLMs Stirring a Tech Revolution: Discussions about the feasibility of distributed computing for LLMs revealed skepticism due to high CPU utilization when using ZLUDA with ROCm, despite the appeal of local computing akin to the Linux/LAMP era.
- Docs Unleashed and Multi-Model Mania: LM Studio launched a new documentation site and also introduced a Multi Model Session feature, which is explained in a tutorial video.
- LM Studio's Growing Pains and Performance Quirks: Users reported issues ranging from high CPU usage to models outputting gibberish, which were sometimes resolved by a simple restart. Compatibility was questioned for older hardware like the RX 570 with ROCm, and errors like "Exit code: 42" when loading models signaled a need for continued troubleshooting.
- Open-interpreter Unpacked and GGUF Model Performance: Open-interpreter issues included connection problems and discussions of various GGUF-compatible models' performance. The Open Interpreter device attracted interest, with options to 3D print it oneself using free STL files from 01.openinterpreter.com/bodies/01-light. Meanwhile, errors with non-blessed models prompted a look at issue #1124 on open-interpreter's GitHub.
OpenInterpreter Discord
- Open Interpreter Makes Leap to Linux: Users have managed to get Open Interpreter running on Ubuntu 22.04, discussing microphone support and client-server mechanics, signaling a push towards cross-platform compatibility.
- DIYers Assemble! The 01 Light Craze: Enthusiastic members of the community are sharing tips on 3D printing and assembling their own 01 Light devices, with dedicated Discord spaces popping up to share build experiences and design tweaks.
- AI-Powered Engineering Discussions Heat Up: Technical conversations are expanding around Open Interpreter, with focus on running the 01 server on various machines, whether low-spec or cloud-based, and enhancing installers for user-friendliness. Developers are also brainstorming integrations for Groq and extending 01 Light functionalities.
- Community Contributions Unleash Open Source Power: The AI engineer community is diving into contributions to the Open Interpreter project, focusing on app development, the performance of different LLMs, and a potential desktop app for Apple silicon devices.
- Open-Source AI Assistants Set the Stage: An insightful YouTube video on the 01 Lite titled "Open Interpreter's 01 Lite - WORLD'S FIRST Fully Open-Source Personal AI AGENT Device" has been highlighted, showing off the capabilities of this homegrown AI assistant. An edited live stream has also been shared to provide a concise overview of the 01 software (Open Interpreter's 01 Lite).
LAION Discord
- EU Data Laws Challenge LAION's Efficiency: LAION datasets may be underperforming in comparison to US datasets, largely due to the EU's stringent regulations. The use of synthetic data and forming collaborations in less restrictive regions were mentioned as possible workarounds, humorously termed "data laundering."
- Leadership Shuffle at Stability AI: Emad Mostaque has stepped down as CEO of Stability AI, with the company confirming his resignation and the appointment of interim co-CEOs Shan Shan Wong and Christian Laforte, who will oversee the search for his replacement. There is speculation about the impact on the company's future and its commitment to open source (Stability AI Press Release).
- Comparing SD3 to DALL-E 3: Discussions indicate that the SD3 model can match some aspects of DALL-E 3 performance, but it struggles with understanding complex interactions, leading to collage-like image assembly rather than cohesive concept blending.
- AI Ethics Debate Surfaces Amidst Industry Drama: A recent conversation on Twitter about the motives of leading AI industry figures led to a guild-wide debate regarding the ethical responsibilities of developers and researchers, along with the impact of AI "celebrity" culture on social media.
- AMD GPUs Fall Behind in AI Support: Guild members expressed dissatisfaction with AMD's support for machine learning workloads compared to NVIDIA's offerings. The lack of consumer-level ML support is viewed as a potential oversight given the rise of models like Stable Diffusion.
- Andrew Ng Foresees AI Workflow Evolution: Google Brain co-founder Andrew Ng predicts that AI agentic workflows could surpass next-generation foundation models this year by iterating over documents multiple times; current one-shot LLM approaches need to evolve (Reddit Highlight).
- MIT Accelerates Image Generation Tech: MIT's CSAIL developed a method that speeds up the image generation process for tools like Stable Diffusion and DALL-E by 30 times, using a streamlined single-step teacher-student framework without compromising image quality (MIT News Article).
- NVIDIA Addresses Diffusion Model Training Hurdles: NVIDIA's recent blog post discusses improvements in training diffusion models, including the EDM2 code and model release. It addresses style normalization issues that can be overcome with changes similar to those in EDM2 (NVIDIA Developer Blog Post).
- Unet's Future in Question with Rise of Linear Networks: Guild members debated the relevance of advancements in Unet given the rise of linear network models for image generation. Although layer norm and traditional normalization methods are being questioned, their integral role in network functionality remains a topic of discussion.
- Large Language Models Show Resilience with Pruning: Insights reveal that large language models (LLMs) retain performance even when middle blocks are removed, hinting at the redundancy of certain segments. This has encouraged a deeper look into the architecture of linear networks and their potential for strategic pruning.
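The pruning observation is easy to play with on a toy residual stack where middle blocks contribute only small updates; this is an illustration of the intuition, not the cited work's method:

```python
# Toy illustration of middle-block pruning: in a residual stack where the
# middle blocks add only small updates, dropping them barely moves the output.

def make_block(delta):
    return lambda x: x + delta          # residual block: x + f(x)

# Two "important" blocks bracketing eight near-identity middle blocks.
blocks = [make_block(1.0)] + [make_block(0.001)] * 8 + [make_block(1.0)]

def forward(x, blocks):
    for b in blocks:
        x = b(x)
    return x

full = forward(0.0, blocks)
pruned = forward(0.0, blocks[:1] + blocks[-1:])  # drop the 8 middle blocks
print(full, pruned, abs(full - pruned))
```

The interesting empirical finding is that trained LLMs really do look like this: probes suggest many middle layers make small, partially redundant refinements to the residual stream.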
Nous Research AI Discord
- Shipping Queries and DIY Solutions: Members are curious about shipping timelines for an unnamed product, anticipating summer availability. They discussed alternative DIY options in the absence of specific release dates.
- AI Model Innovation and Optimism: Enthusiasm for Claude's impact on open-source projects is high, and an implementation of Raptor without pretraining is reported to summarize 3B model transcripts in just 5 minutes. References to FastAPI's ease of use for back-end development were shared, alongside a note about Suno.AI's ability to curate Spotify playlists.
- Deployment and Persuasion Techniques Analyzed: Kubernetes is leveraged for deploying Nous models, while a pre-registered arXiv study analyzes the persuasive power of LLMs. Platforms like ArtHeart.ai are recognized for AI-driven art creation, and BitNet 1.5's quantization-aware training was reviewed for inference speedups.
- Weaving Worlds and Aiding Therapists with AI: The World Simulator project revealed a remarkable level of engagement, while work on an AI therapist dubbed Thestral aims to use the LLaMA 70B model. The community is actively discussing the ethical constraints of Opus from Claude 3, the impact of refusal prompts in models like Hermes 2 Pro, and manipulation techniques to circumvent LLM limitations, referred to as the "Overton Effect."
- LLMs, Tuning, and Refinement Questions: Debates arose around the inclusion of few-shot prompts in SFT datasets and the quest for tiny LLMs, with recommendations to watch Andrej Karpathy's videos. The importance of causal masking and the mysteries behind Llama's tri-layer feedforward design incited discussions, circling an arXiv paper on SwiGLU nonlinearity.
- Parenting, Open Sourcing, and RAFT's Prominence: Members touched upon their experiences with parenthood while seeking open-source options like a Wikipedia RAG Index. The conversation highlighted a move towards a promising retrieval-augmented fine-tuning method called RAFT, which was discussed in a shared paper and can be explored in the Gorilla GitHub repository.
- Casual Chats and World-Sim Tech: A member hinted at changing language settings on Tenor.com, displaying a shared Grim Patron GIF, while a simple "helloooo" from another brightened the chat.
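The "tri-layer feedforward" question above has a short answer: Llama's FFN is SwiGLU, which needs three weight matrices (gate, up, down) instead of the classic two. A scalar sketch, with the projections as plain numbers rather than matrices:

```python
import math

# SwiGLU feedforward, scalar version: three projections (gate, up, down)
# instead of the classic two-matrix FFN, where silu(x) = x * sigmoid(x).

def silu(x):
    return x / (1.0 + math.exp(-x))

def swiglu_ffn(x, w_gate, w_up, w_down):
    # gate path is passed through SiLU and multiplies the up path,
    # then the product is projected back down.
    return w_down * (silu(w_gate * x) * (w_up * x))

out = swiglu_ffn(1.0, w_gate=2.0, w_up=0.5, w_down=1.0)
print(out)
```

The gating multiplication is the point: it gives the FFN a data-dependent filter, which the SwiGLU paper found improves quality enough to justify the third matrix.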
OpenAI Discord
- Sora Shapes the Future of Filmmaking: OpenAI's Sora is celebrated for empowering artists and filmmakers to create groundbreaking surreal art. Director Paul Trillo lauds Sora for enabling the visualization of previously unimaginable ideas, with its potential highlighted on the OpenAI blog.
- Diving into AI's Cultural Compass: A hot topic in AI discussions is the perceived Western liberal-centrist bias in language models like GPT, sparking debates on whether there should be multiple culturally aligned AI versions. Efforts to align AI with non-Western norms are facing challenges, and the "Customize ChatGPT" feature was cited as a tool for users to personalize AI responses with their values.
- GPT-4's Evolving Features and Access Concerns: Members noticed a reduction in the Custom GPT pin limit and sought a keyboard shortcut for shared GPT access. OpenAI confirmed the capability of GPT-4 with Vision to read images, while also announcing the end of the ChatGPT plugins beta through a discontinuation notice.
- Refining AI to Enrich User Experience: Strategies were shared for enhancing AI's creative writing, narrative style, and coding output quality. A user faced issues with the .Completion endpoint due to an OpenAI SDK update and was directed to the v1.0.0 Migration Guide on the openai-python GitHub repository for assistance.
- Enabling Accessibility in Vision: Privacy-sensitive advice was provided to a member looking to improve image recognition for disabled individuals, with an emphasis on writing up the issue for a Discord suggestions channel. Users also explored prompt engineering tactics to craft AI with specific personalities and to refine the generation of hypothesis paragraphs, avoiding generic statements.
HuggingFace Discord
AI Art Prompt Guide Quest: Users are seeking advice on crafting prompts for AI-generated art, though specific resources weren't provided.
Blenderbot's Role-Play: Discussions highlight Blenderbot's ability to exhibit consistent character traits during interactions, in contrast to AI that acknowledges its non-human nature.
GPU Operation Showdown: A technical debate unfolded around the execution speed differences between multiplication and conditional checking on GPUs. Looking into iq's work was suggested for further insights.
Complex Creativity for ChatGPT: A user requested a linguistically diverse and creative prompt for ChatGPT, prompting another to exclaim over the prompt's complexity.
Optimizing GPU Inference: The community explored methods and libraries like TensorRT-LLM and exLLama v2 for optimizing large language model inferencing on GPUs, with suggestions for tools ideal for simultaneous multi-user serving.
Rust's Rising Star: Conversations around converting the GLiNER model to Rust via the Candle library noted benefits including reduced dependencies and suitability for production, with GPU compatibility confirmed.
Efficient Coding with Federated Learning: An open-source GitHub project demonstrates an energy-efficient approach to federated learning for load forecasting.
Compiling the Stable Diffusion Compendium: A plethora of resources and guides for Stable Diffusion have been shared by community members, including civitai.com for comprehensive learning on Stable Diffusion.
Deck Out Your Memory - Diffusers Edition: An experimental tool for estimating the inference-time memory requirements of DiffusionPipeline has been released for feedback.
SegGPT: The Contextual Segmentor: Introducing SegGPT on HuggingFace, a model with impressive one-shot segmentation that can be trained for various image-to-image tasks.
BLIP-2 Ups the Fusion Game: In vision-language model fusion, BLIP-2 has been recommended for connecting pre-trained image encoders with language models, further elaborated in the transformers documentation.
Embedding Precision with Quantization: Embedding Quantization for Sentence Transformers brings major search speed improvements without compromising retrieval accuracy.
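Embedding quantization in its simplest (binary) form keeps only the sign of each dimension, so similarity search becomes Hamming distance over packed bits. A small sketch of the idea, independent of the Sentence Transformers implementation:

```python
# Binary embedding quantization: keep the sign bit of each dimension and
# compare with Hamming distance. This is 32x smaller than float32 vectors,
# and rankings often track cosine similarity on the originals closely.

def binarize(vec):
    bits = 0
    for i, v in enumerate(vec):
        if v > 0:
            bits |= 1 << i  # pack one sign bit per dimension
    return bits

def hamming(a, b):
    return bin(a ^ b).count("1")

query = [0.3, -1.2, 0.7, 0.1]
docs = {
    "close": [0.1, -0.4, 0.9, 0.2],    # same sign pattern as the query
    "far":   [-0.5, 0.8, -0.1, -0.3],  # opposite sign pattern
}
codes = {name: binarize(v) for name, v in docs.items()}
q = binarize(query)
best = min(codes, key=lambda name: hamming(q, codes[name]))
print(best)
```

Production setups typically retrieve a candidate set with the cheap binary codes, then rescore the survivors with higher-precision embeddings.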
Catering to the German Learners: A GPT-powered German language learning tool named Hans promises enhanced user experience for German learners and is available on the GPT Store.
All-MiniLM-L6-v2 Download Dilemma: A user looked for assistance in downloading and training the all-MiniLM-L6-v2 model, emphasizing the power of community support for model implementation.
Revolutionizing Decision-Making with Langchain: An article on Medium posits Langchain as a transformative approach to how language agents resolve problems, available on Medium.
Diving Into Dataâs Importance: A shared arXiv paper emphasizes the significance of data as a potential critical influencing factor, reminding us of the indispensable value of quality data.
NEET/JEE Data Quest: A dataset of NEET/JEE exams is being sought for training MCQ answer generators, indicating the intersection of AI technology and educational resources.
AI on the Forefront: Recurrent Neural Notes newsletter discusses the potential limits of AI, possibly providing nuanced insights on future AI capabilities available on Substack.
LlamaIndex Discord
Twitter Sneak Peek on Human-LlamaIndex Workflow: A new template was introduced to streamline interactions between humans and LlamaIndex's agents, slated to reduce intrusiveness for users. The details and a preview were shared on Twitter.
Integrating Custom LLMs with LlamaIndex: Leonie Monigatti detailed the process of incorporating custom Language Models (LLMs) into LlamaIndex, with an explanation available on LinkedIn.
Guide to Building RAG Agent for PDFs: A tutorial by Ashish S. on creating a LlamaParse-powered RAG flow for PDF files was published and can be viewed in its entirety via this Tweet.
New LlamaIndex Python Documentation Released: LlamaIndex has updated its Python documentation to feature example notebooks better, improved search, and clearer API layouts, announced in a Twitter post.
LlamaIndex Community Tackles Integration and Documentation Challenges: Discussions in the community highlighted various integrations with Merlin API and LocalAI, an inquiry about the logic in LlamaIndex's evaluation process, conflicting documentation after the v0.10 updates, requests for examples of multi-agent chatbots, and turning Python functions into LlamaIndex tools. Users exchanged resources, including several documentation links and GitHub code examples.
Latent Space Discord
- Whisper for Video Processing: Community members are seeking a video processing tool comparable to OpenAI's Whisper; suggestions included Video Mamba, Twelve Labs, and videodb.io.
- OpenAI's Sora Gains Traction Amongst Creatives: OpenAI's introduction of Sora has garnered positive feedback from artists, showcasing the tool's versatility in generating both realistic and imaginative visuals.
- Google Confounds with AI Services: Discussion revealed confusion between Google's AI Studio and Vertex AI, particularly with the former's new 1 million token context APIs, drawing comparisons with OpenAI's API for model deployment.
- AI Wearables Gaining Popularity: The ALOHA project, an open-source AI wearable, is discussed amid conversations about the rise of AI wearables. Pre-orders for another AI wearable, Compass, began, indicating thriving interest in local, personal AI solutions.
- Efficiency in LLMs with LLMLingua: Microsoft's LLMLingua was shared as a promising tool for compressing prompts and the KV cache in large language models (LLMs), achieving significant compression rates with little loss in performance.
- Insider AI Discussion Takes Podcast Form: A podcast episode highlighted with a tweet provided insights into major AI companies, sparking interest in the AI community.
- AI Unbundling Trend Spotted: An essay featured on latent.space discussed the unbundling of ChatGPT, suggesting that specialized AI services are becoming more popular as user growth for generalist models stagnates.
- Paper Club Hiccups: The llm-paper-club-west faced technical difficulties with speaking rights on Discord, causing the meeting to switch over to Zoom and raising awareness of the need to streamline access for future online gatherings.
- Ideas and Music Flow in AI in Action Club: The club had vibrant discussions on tensor operations and coding best practices for LLMs, plus spontaneous sharing of calm nighttime music from Slono on Spotify. They also released a schedule for upcoming sessions on AI topics.
OpenAccess AI Collective (axolotl) Discord
GaLore Optimization Sparks Debate: The GaLore optimizer discussion highlighted its VRAM-saving abilities but also raised the question of potential over-training due to "coarseness." Some engineers are eager to test GaLore, especially in light of the new Mistral v0.2 Base Model release, which now has a 32k context window.
Fine-Tuning Large Language Models on a Budget: Technical discussions surfaced around fine-tuning a 7B model within 27GB of memory, with a spotlight on a GitHub repository called torchtune that allows for efficient fine-tuning without Hugging Face dependencies. A specific pull request was recommended for reviewing full fine-tune methods requiring less than 16GB of RAM.
TypeError Troubles and Help Channel Support: A member grappling with a TypeError in examples/openllama-3b/qlora.yml was directed to a specialized help channel (#1111279858136383509) for expertise in resolving it. This exemplifies the collaborative environment, urging members towards specific resources for technical resolutions.
Medical Model Publishing Dilemma: The decision whether to publicly share a preprint of a medical model in the midst of journal review sparked a discussion on the trade-offs of early disclosure. The conversation underscores the importance of strategic research dissemination in the field.
Open Calls for Developer Recognition and Business Collaboration: CHAI announced prizes for LLM developers, encouraging community contributions, whereas businesses were invited to share their applications of Axolotl confidentially, alluding to the value of real-world use-case narratives in furthering AI technology.
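The memory numbers in this section follow from simple accounting: full fine-tuning with Adam keeps weights, gradients, and two optimizer moments per parameter. A back-of-the-envelope estimator (activations and allocator overhead excluded, so treat results as lower bounds; the 8-bit variant is one of several tricks projects like torchtune combine to fit under 16GB):

```python
# Back-of-the-envelope VRAM for full fine-tuning: weights + grads + Adam's
# two moment buffers, ignoring activations and allocator overhead.

GB = 1024 ** 3

def finetune_bytes(n_params, weight_bytes=2, grad_bytes=2, optim_state_bytes=8):
    # bf16 weights/grads (2 bytes each), fp32 Adam moments (2 * 4 bytes).
    return n_params * (weight_bytes + grad_bytes + optim_state_bytes)

n = 7_000_000_000  # a 7B model
print("full fine-tune: %.1f GiB" % (finetune_bytes(n) / GB))
# 8-bit optimizer states shrink the moment buffers 4x:
print("8-bit optimizer: %.1f GiB" % (finetune_bytes(n, optim_state_bytes=2) / GB))
```

This is why naive full fine-tuning of a 7B model lands far above 27GB, and why memory-efficient optimizers, LoRA-style adapters, and gradient checkpointing are the usual levers.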
OpenRouter (Alex Atallah) Discord
- Midnight 70B Unleashed into the Roleplay Realm: The Midnight 70B model, tailored for storytelling and roleplay with lineage from Rogue Rose and Aurora Nights, is now up for grabs sporting a 25% discount, tagged at $0.009/1k tokens on OpenRouter.
- OpenRouter Refines Cost Tracking Tools: OpenRouter implements an advanced Usage Analytics feature for real-time cost tracking and has made a Billing Portal available for more efficient credit and invoice management.
- Noromaid Mixtral and Bagel Prices Readjusted: Prices for running the Noromaid Mixtral and Bagel models no longer include discounts, with new prices set at $0.008/1k tokens for Mixtral while Bagel comes at $0.00575/1k tokens.
- Claude 3 & Grok: In multi-model discussions, Claude 3's self-moderated version gained traction for improved filtering, while the Grok model generated debate: its performance was deemed satisfactory against premium alternatives but cost-prohibitive. Users voiced preferences for longer context lengths and raised quality differences in model completions between OpenRouter and direct API usage.
- OpenRouter Weathers DDoS and an API Response Issue: OpenRouter faced a DDoS attack leading to service instability, since resolved, and users observed that citation data from Perplexity isn't provided in OpenRouter's API responses as expected.
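Per-1k-token pricing like the figures above is easy to sanity-check. A small helper using the prices quoted in this summary (prices change frequently; check the live model pages before relying on them):

```python
# Cost estimator for per-1k-token pricing, using the prices quoted above.

PRICE_PER_1K = {
    "midnight-70b": 0.009,
    "noromaid-mixtral": 0.008,
    "bagel": 0.00575,
}

def cost_usd(model, tokens):
    return PRICE_PER_1K[model] / 1000 * tokens

# An 8k-token roleplay exchange on Midnight 70B:
print("$%.4f" % cost_usd("midnight-70b", 8000))
```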
Eleuther Discord
- PyTorch Struggles with MPS: An ongoing effort to improve the MPS backend in PyTorch is underway, with notable issues like tensor copying, open since September 2022, remaining a hurdle. This work is anticipated to enhance performance for local model testing and finetuning.
- Token Blocking Contested for LLM Training: A debate on token block strategies for language model pretraining weighed the merits of overlapping vs. non-overlapping sequences and their impact on model efficacy, touching on the importance of beginning-of-sentence tokens.
- AMD Driver Dilemma Spurs Discussion: Comparisons between AMD Radeon and Nvidia GPU drivers sparked debates, centering on driver inadequacies and the possibility of AMD open-sourcing their drivers. Some participants considered the potential for activist-investor action to prompt change at AMD.
- Machine Learning Model Merger Methodologies: New model merging methods are being created with the aim of outdoing existing techniques such as DARE, though these are still experimental and require additional testing and validation.
- New ML Architectures Brim with Promise: Innovations like DenseFormer and Zigzag Mamba suggest improvements in perplexity and diffusion-model memory usage respectively, while DiPaCo offers a novel approach to robust distributed model training.
- SVM Kernel Conquest: Results indicate that the sigmoid SVM kernel performs better on Pythia's input embeddings than other kernels such as rbf, linear, and poly.
- N-gram Project "Tokengrams" Gains Traction: The Tokengrams project is now reportedly usable for efficiently computing and storing token n-grams from text corpora, offering an efficient resource for researchers at GitHub - EleutherAI/tokengrams.
- Chess-GPT Gets Analyzed: A case study on Chess-GPT discusses using language models to predict chess moves with an Elo rating estimation, alongside validating computations using linear probes, detailed at Chess GPT Interventions.
- Evaluation Variability Worries AI Engineers: Inconsistencies in evaluation results when comparing Hugging Face transformers to Megatron-DeepSpeed have drawn attention, with suggestions to verify whether implementation details like bfloat16 numeric handling in fused kqv multiplications might be contributing to the variability.
- Minecraft Serves as RL Testing Ground: A Minecraft-based environment for Reinforcement Learning, available at GitHub - danijar/diamond_env, alongside discussions on project Voyager, underscores the use of games for AI model collaboration research.
- Multimodal Embedding Spaces Exploration: Interest in theoretical work on multimodal embedding spaces was raised, with the community providing insights on how Stable Diffusion's subculture treats embeddings in line with IMG2IMG workflows.
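The kernel comparison described above is easy to reproduce on toy data; a minimal sketch assuming scikit-learn, with random features standing in for Pythia's input embeddings (the data and labels here are synthetic placeholders):

```python
# Toy reproduction of the SVM kernel comparison above: fit SVMs with
# different kernels on stand-in "embedding" data and compare accuracy.
# (Random features are only a placeholder for Pythia input embeddings.)
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 32))               # fake 32-d embeddings
y = (X[:, :4].sum(axis=1) > 0).astype(int)   # synthetic binary labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
for kernel in ("sigmoid", "rbf", "linear", "poly"):
    acc = SVC(kernel=kernel).fit(X_tr, y_tr).score(X_te, y_te)
    print(f"{kernel:8s} accuracy: {acc:.3f}")
```

On real embedding data the ranking can differ; the point is only that swapping the `kernel` argument is the whole experiment.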
CUDA MODE Discord
When Discord Fails, Meet Pushes Through: Technical difficulties during a GTC event led to the suggestion of defaulting to voice channels for future lectures, due to screen-sharing issues on Discord stage channels. One unsatisfied member proposed switching to Google Meet in the future, given the instability of Discord streams.
CUDA-tious Profiling: For engineers delving into CUDA, a lecture on how to profile CUDA kernels in PyTorch was shared, complete with accompanying slides and a GitHub code repository. CUDA programming becomes a necessity when seeking performance gains where PyTorchâs speed is insufficient.
Triton Tricky Tidbits: Discussions around Triton's performance issues were prominent, and members were warned that some Triton operations might be phased out in the future. A new prototype folder in the `torchao` repository was proposed for collaboration on API design for efficient kernel usage, as support for Triton continues.
Sparsity Meets Decomposition Elegance: A novel approach to distributed sparse matrix multiplication was introduced in the Arrow Matrix Decomposition paper by researchers Lukas Gianinazzi and Alexandros Nikolaos Ziogas, with the implementation available on GitHub.
Blackwell GPUs Smile for the Camera: Members discussed the new Blackwell GPUs, highlighting a tweet with a humorous take on the GPUsâ smiley face pattern. Speculation on the unseen NVIDIA Developer Discord server took place after a GitHub discussion about the CUTLASS library was brought up. The community also touched on data type standardization in deep learning, noting the absence of Google in recent standard consortiums and the lack of an IEEE standard for new floating point numbers.
Interconnects (Nathan Lambert) Discord
Mistral's New 7B Model Steals the Spotlight: Mistral AI casually dropped a new model, the Mistral 7B v0.2 Base, at the @cerebral_valley hackathon. The model details, including fine-tuning guidance, are available here, although no magnet links were provided for this release, as noted by @natolambert.
Shakeup at Stability AI: CEO Emad Mostaque resigned from Stability AI, hinting at a future focus on #DecentralizedAI. The community expressed mixed feelings about the impact and direction of his tenure, amid discussions of internal struggles and the nature of Stability AI's contributions to AI academia.
Nemo Interoperability Seekers: Questions arose about converting and wrapping Nemo checkpoints for compatibility with Hugging Face, underscoring the technical challenges of machine learning model interoperability.
AI's Ethical Tightrope:
- Debates ignited over whether creating a "generalist agent" in Reinforcement Learning is both practically and fundamentally feasible, based on discussions found here.
- The channel also tackled the FTC's antitrust lawsuit against Apple, with Nathan Lambert pointing out the public's misunderstanding of antitrust regulations and backing his views with supporting tweets.
February's Big AI Chats: Illuminating interviews with Anthropic's CEO and Mistral's CEO have been drawing attention, such as this "Fireside Chat" and the discussion of Amodei's AI industry predictions here. Additionally, Latent Space's February recap, highlighting key AI developments, can be found here.
LangChain AI Discord
- AI Delivers More for Less: An innovative method was proposed to bypass the 4k output token limit of GPT-4-Turbo by initiating a follow-up request upon reaching the length limit, which allows the model to continue generating content seamlessly.
- Bedrock Meets Python: A guide has surfaced detailing the use of Bedrock with Python, showing practical integration techniques. Interested engineers can dive into the guide here.
- Analyze LLM Conversations with SimplyAnalyze.ai: The launch of SimplyAnalyze.ai was announced, which pairs with LangChain to dissect LLM dialogue across business divisions. To join the free developer preview, engineers can visit SimplyAnalyze's website.
- Harnessing LangChain for Decision-making: A post detailing the use of LangChain in Agent Tree Search was shared to foster more sophisticated decision-making processes with Language Models. Engineers can read more about it here.
- Upgraded Chatbot with Memory and Parsing Skills: Enhancements to a local character AI chatbot have been made, improving CSV and NER parsing, among other features. To check out the upgraded capabilities, the GitHub repository is available here.
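The follow-up-request trick described above can be sketched generically. In this sketch, `generate_full` and the `(text, finish_reason)` return shape are assumptions for illustration, not a specific vendor API, though OpenAI-style chat APIs do report a `"length"` finish reason when output is cut off:

```python
# Sketch of the continuation trick above: when a completion stops because it
# hit the output-token limit, feed the partial answer back and ask the model
# to continue, concatenating the pieces.
from typing import Callable, Tuple

def generate_full(prompt: str,
                  generate: Callable[[str], Tuple[str, str]],
                  max_rounds: int = 10) -> str:
    """`generate` returns (text, finish_reason); loop while cut off for length."""
    full = ""
    request = prompt
    for _ in range(max_rounds):
        text, finish_reason = generate(request)
        full += text
        if finish_reason != "length":
            break
        # Ask the model to pick up exactly where it stopped.
        request = prompt + "\n\nPartial answer so far:\n" + full + "\n\nContinue."
    return full

# Fake backend that emits 5-char chunks of a fixed answer, for illustration:
def fake_generate(answer="hello world, this is a long reply"):
    state = {"pos": 0}
    def gen(_request):
        chunk = answer[state["pos"]:state["pos"] + 5]
        state["pos"] += 5
        return chunk, ("length" if state["pos"] < len(answer) else "stop")
    return gen

print(generate_full("Q?", fake_generate()))  # reassembles the full answer
```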
LLM Perf Enthusiasts AI Discord
Real Estate Matching Gone Awry: A discussion unfolded around GPT-4 Turbo misinterpreting property size requirements, suggesting a 17,000-square-foot property despite a request for 2,000-4,000 square feet. A simple CSV-based database filter was recommended over a complex LLM pipeline, sparking a conversation about common missteps and linking to a resource by Jason Liu on the potential over-reliance on embedding search in LLMs.
Frustrations with Token Limitations: Participants voiced frustration with Anthropic's rate limit of 1M tokens per day, which they consider insufficient given the 200k context window (only a handful of full-context requests fit in a day). The Bedrock monthly-fee model was discussed as a potential alternative, while a $500 scale plan from Anthropic was suggested as offering easier access for extensive use.
Seeking Superior Explainers: The community was asked for top explainer resources on advanced LLM topics, with a specific call for high-quality, clear content on topics like RLHF rather than a vast collection of blogs. Exa.ai was suggested as a useful resource for delving into LLM-related subjects.
Brief Cry for Coding Quality: In the #jobs channel, a user lamented the difficulty of writing high-quality code with a succinct, relatable one-liner.
GPT-3.5-0125 Takes the Lead: GPT-3.5-0125 was lauded for significant performance improvements over previous models, as observed in a user's comparative tests, elevating its status as a particularly advanced iteration within the realm of LLMs.
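The "simple CSV-based filter" recommended above amounts to a one-line range predicate; a minimal sketch with illustrative listing data (the column names and rows are invented for the example):

```python
# The simple CSV filter recommended above: a range predicate on square
# footage instead of an LLM or embedding search.
import csv, io

LISTINGS_CSV = """address,sqft
1 Oak St,17000
2 Elm St,2500
3 Pine St,3900
"""

def filter_by_sqft(csv_text: str, lo: int, hi: int):
    """Return addresses whose sqft falls within [lo, hi]."""
    rows = csv.DictReader(io.StringIO(csv_text))
    return [r["address"] for r in rows if lo <= int(r["sqft"]) <= hi]

print(filter_by_sqft(LISTINGS_CSV, 2000, 4000))  # the 17,000 sqft outlier is excluded
```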
Alignment Lab AI Discord
- Call for AI Avengers: The Youth Inquiry Network and Futracode are teaming up to develop a machine learning algorithm that recommends optimal research topics from existing databases. They're recruiting web developers, data analysts, and AI & ML experts to champion this cause.
- Contribute Your Skills for Glory: Volunteers will not only advance their careers with a shiny new portfolio piece but also walk away with a certificate and two professional recommendation letters. Those who help will also get to keep the developed ML algorithm code for personal or commercial use.
- Flex Time for World Savers: They assure a flexible commitment for this groundbreaking project, perfect for superheroes with a packed schedule. Recruits can bypass the bureaucratic labyrinth by dropping a simple "interested" to get started on their mission.
- Mystery Educational Reform Doc Drops: An unspecified member shares a Google Docs link discussing Post-AGI Educational Reforms, possibly hinting at a future-focused AI education paradigm.
- Moment of Meta Moderation: In an ironic twist, a moderator experiences a self-epiphany of their own status in a call for moderation, reminding us that even bots can forget their protocols.
Datasette - LLM (@SimonW) Discord
- LLM Versus Ollama Showdown: Members clarified that `llm` interfaces with models, such as Mistral, by setting up API endpoints, which are then executed by `ollama`. `ollama` allows local model execution, making the models accessible via local HTTP API endpoints.
- Techie Commits with AI Assistance: The tool AICommits (GitHub - Nutlope/aicommits), designed to help write git commit messages with AI, gained appreciation for its utility, with requests for additional features such as emoji standards for commits.
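The local HTTP endpoints mentioned above can be hit directly; a minimal sketch assuming a default `ollama serve` instance on port 11434 with a pulled `mistral` model (the helper names are illustrative):

```python
# Sketch of calling ollama's local HTTP API directly (assumes `ollama serve`
# is running and the model has been pulled, e.g. `ollama pull mistral`).
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"

def build_request(model: str, prompt: str) -> bytes:
    """Build the JSON body ollama's /api/generate endpoint expects."""
    return json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()

def ask_ollama(model: str, prompt: str) -> str:
    req = urllib.request.Request(
        OLLAMA_URL,
        data=build_request(model, prompt),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# ask_ollama("mistral", "Say hi")  # requires a running local ollama server
print(build_request("mistral", "Say hi").decode())
```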
Skunkworks AI Discord
- AI Cooking Up Stunts: An AI has crafted a unique cookbook based on YouTuber Mr. Beast's daring adventures, which sparked interest in the group. The inventive application was showcased in a YouTube video, mixing culinary arts and machine learning for whimsical results.
- In Search of German Tech Savvy: A community member is on the lookout for German-language resources on deep learning and AI, indicating a desire to dive into technical content in their native tongue. The request painted a picture of a global, multilingual interest in the AI community.
PART 2: Detailed by-Channel summaries and links
Stability.ai (Stable Diffusion) ▷ #general-chat (1195 messages🔥🔥🔥):
- Stable Diffusion Inquiry and Assistance: Members discussed various aspects and uses of Stable Diffusion models, including performance, sampler settings, and ControlNet models. Users also exchanged guidance on handling errors and setting up the AI on different systems, especially with AMD GPUs.
- Exploring SD3 and Alternatives: Conversation topics included the anticipated release window for SD3 and potential improvements, along with comparisons to other offerings like SDXL and AI-generated video potential.
- Feedback on Online AI Services: The chat touched on the limitations and frustrations with online AI image generation services, such as those on Civitai and Suno, specifically pointing out issues with content restrictions and preferences on the type of content displayed.
- Debate on AI Ethics and Regulation: Members debated the need for regulations on AI technology use and the importance of open-source models versus proprietary ones. Concerns were raised about regulations potentially stifling innovation and accessibility.
- Technical Troubleshooting and Learning: New members seeking help with technical issues regarding model installation and use were directed to support channels and experts within the community. More experienced members aimed to guide and provide resources while advising newcomers on the learning curve associated with AI image generation.
Links mentioned:
- ArtificialGuyBr: no description found
- imgur.com: Discover the magic of the internet at Imgur, a community powered entertainment destination. Lift your spirits with funny jokes, trending memes, entertaining gifs, inspiring stories, viral videos, and ...
- Stable Cascade Examples: Examples of ComfyUI workflows
- Purity: no description found
- Emad Mostaque resigns as CEO of troubled generative AI startup Stability AI - SiliconANGLE: Emad Mostaque resigns as CEO of troubled generative AI startup Stability AI - SiliconANGLE
- thibaud (Thibaud Zamora): no description found
- Dune Oil GIF - Dune Oil - Discover & Share GIFs: Click to view the GIF
- Nvidia's newest cloud service promises to accelerate quantum computing simulations - SiliconANGLE: Nvidia's newest cloud service promises to accelerate quantum computing simulations - SiliconANGLE
- CinematicRedmond - Cinematic Model for SD XL - v1.0 | Stable Diffusion Checkpoint | Civitai: Cinematic.Redmond is here! I'm grateful for the GPU time from Redmond.AI that allowed me to make this model! This is a Cinematic model fine-tuned o...
- Artificialguybr Demo Lora - a Hugging Face Space by artificialguybr: no description found
- Neon City Lights | Suno: japanese vocals chill jazz j-pop downtempo song. Listen and make your own with Suno.
- Tweet from Christian Laforte (@chrlaf): @thibaudz Thanks Thibaud, as I wrote elsewhere, the plan hasn't changed, we are still hard at work improving the model towards open release. Including source code and weights.
- Ok Then Um GIF - Ok Then Um Well Ok Then - Discover & Share GIFs: Click to view the GIF
- Boring Reality - BoringReality_primaryV4.0 | Stable Diffusion LoRA | Civitai: NOTE: Please read below for working with these loras. They are unlikely to give good results when used individually and as is. This model is actual...
- ControlNet 1.1 Models - Tile (e) | Stable Diffusion Controlnet | Civitai: STOP! THESE MODELS ARE NOT FOR PROMPTING/IMAGE GENERATION These are the new ControlNet 1.1 models required for the ControlNet extension , converted...
- Reddit - Dive into anything: no description found
- Stable Diffusion Glossary - Stable Diffusion Art: Confused about a term in Stable Diffusion? You are not alone, and we are here to help. This page has all the key terms you need to know in Stable Diffusion.
- Drone Shot "Above" XL LoRA - v1.0 | Stable Diffusion LoRA | Civitai: Use " above " in prompt. Works best for large scenes and not individual objects or characters.
- GitHub - virattt/financial-agent: A financial agent, built entirely with LangChain!: A financial agent, built entirely with LangChain! Contribute to virattt/financial-agent development by creating an account on GitHub.
- AI Dance Animation - [ NEXT GEN ] - Stable Diffusion | ComfyUI: This AI animation was done using AnimateDiff and ControlNet nodes without any girl LORAs. Since the last AI dance video it's a major improvement in consisten...
- Unauthorized "David Attenborough" AI clone narrates developer's life, goes viral: "We observe the sophisticated Homo sapiens engaging in the ritual of hydration."
- artificialguybr (ArtificialGuy/JV.K): no description found
- ControlNetXL (CNXL) - bdsqlsz-depth | Stable Diffusion Checkpoint | Civitai: bdsqlsz : canny | depth | lineart-anime | mlsd-v2 | normal | openpose | recolor | segment | segment-v2 | sketch | softedge | t2i-color-shuffle | ti...
- Roko's Basilisk - LessWrong: Roko's basilisk is a thought experiment proposed in 2010 by the user Roko on the Less Wrong community blog. Roko used ideas in decision theory to argue that a sufficiently powerful AI agent would have...
- Who is going to tell her…: Support the channel by grabbing a t-shirt: http://www.clownplanetshirts.com Don't forget to subscribe. Hit the bell to stay updated on the latest videos. Watch...
- How AI Image Generators Work (Stable Diffusion / Dall-E) - Computerphile: AI image generators are massive, but how are they creating such interesting images? Dr Mike Pound explains what's going on. Thumbnail image partly created by...
- GitHub - stitionai/devika: Devika is an Agentic AI Software Engineer that can understand high-level human instructions, break them down into steps, research relevant information, and write code to achieve the given objective. Devika aims to be a competitive open-source alternative to Devin by Cognition AI.: Devika is an Agentic AI Software Engineer that can understand high-level human instructions, break them down into steps, research relevant information, and write code to achieve the given objective...
- Revenge of the nerds - John Goodman speech: Inspiring stuff!
- StabilityMatrix/README.md at main · LykosAI/StabilityMatrix: Multi-Platform Package Manager for Stable Diffusion - LykosAI/StabilityMatrix
- artificialguybr/cinematic.redmond â Run with an API on Replicate: no description found
- Eyaura: Give Me A Soul. Album: T.B.D.: G STRING - STEAM:https://store.steampowered.com/app/1224600/G_String/G STRING DISCORD - OFFICIAL:https://discord.gg/fUuDyx7uYeG STRING DISCORD - MISC:https:/...
- Name Archetypes: So, I made these faces in Midjourney by using the prompt "a photo of (name) --style raw." And I couldn't help but notice that whenever you meet a new person, they sorta have a name assoc...
- How to Generate Multiple Different Characters, Mix Characters, and/or Minimize Color Contamination | Regional Prompt, Adetailer, and Inpaint | My Workflow - 2. Adetailer | Stable Diffusion Workflows | Civitai: How to Generate Multiple Different Characters, Mix Characters, and/or Minimize Color Contamination | Regional Prompt, Adetailer, and Inpaint | My W...
- Forge Is Not Using ComfyUI as A Backend · lllyasviel/stable-diffusion-webui-forge · Discussion #169: Recently some people begin to spread misinformation about Forge using ComfyUI as a backend. This is false, harmful to the community, and harmful to the efforts of our engineering team. The backend ...
Unsloth AI (Daniel Han) ▷ #general (1009 messages🔥🔥🔥):
- Exploring Metrics in SFTTrainer: A user seeks advice on using a generation-based metric for validation with `SFTTrainer`, referencing a workaround in a GitHub issue. They are unclear about the preds received in the `compute_metrics` function and how `SFTTrainer` computes loss when fine-tuning with LoRA adapters.
- Chatbot Model Inference Hardware Requirements: A user asks how to determine hardware requirements for running LLM models like Nous-Capybara-34B-GGUF; another user suggests referring to another HF model's discussion for estimates, clarifying that model requirements can vary based on quantization and prompt.
- Model Differences & Quantization: An inquiry about the differences between two versions of Mistral models leads to an explanation that 4-bit models like here are faster to download but suffer a slight drop in accuracy.
- Mistral and Their Marketing Strategy: Discussion unfolds around Mistral's model release practices, deemed unusual by a member due to their not uploading base models to Hugging Face and the unconventional leaking incident on 4chan.
- Debate on Computer Science Education: A heated discussion takes place on the importance of a Computer Science degree in light of LLMs now capable of writing code. The conversation veers into various programming languages, memory safety, and the value of degrees from different universities.
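The hardware-requirements and 4-bit-quantization questions above mostly come down to weight storage; a back-of-envelope sketch (this ignores activations and KV cache, so treat the numbers as lower bounds):

```python
# Back-of-envelope weight-memory estimate behind the "4-bit is smaller"
# point above: bytes = parameter count * bits per weight / 8.
def weight_gb(n_params: float, bits: int) -> float:
    """Approximate weight storage in GB for a model at a given bit width."""
    return n_params * bits / 8 / 1e9

for bits in (16, 8, 4):
    print(f"7B model at {bits:2d}-bit: ~{weight_gb(7e9, bits):.1f} GB")
# A 34B model like Nous-Capybara needs ~17 GB at 4-bit vs ~68 GB at 16-bit,
# which is why quantization level dominates the hardware question.
```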
Links mentioned:
- Inflection-2.5: meet the world's best personal AI: We are an AI studio creating a personal AI for everyone. Our first AI is called Pi, for personal intelligence, a supportive and empathetic conversational AI.
- GPT4All: Free, local and privacy-aware chatbots
- Google Colaboratory: no description found
- 152334H/miqu-1-70b-sf · Hugging Face: no description found
- Hugging Face â The AI community building the future.: no description found
- Funny Very GIF - Funny Very Sloth - Discover & Share GIFs: Click to view the GIF
- Sloth Slow GIF - Sloth Slow Stamp - Discover & Share GIFs: Click to view the GIF
- Sloth Smile Slow GIF - Sloth Smile Slow Smooth - Discover & Share GIFs: Click to view the GIF
- mistralai/Mistral-7B-Instruct-v0.2 · Hugging Face: no description found
- unsloth/mistral-7b-v0.2-bnb-4bit · Hugging Face: no description found
- unsloth/mistral-7b-v0.2 · Hugging Face: no description found
- TheBloke/CodeLlama-34B-Instruct-GGUF · [AUTOMATED] Model Memory Requirements: no description found
- TheBloke/Nous-Capybara-34B-GGUF · Hugging Face: no description found
- Generation - GPT4All Documentation: no description found
- GitHub - InflectionAI/Inflection-Benchmarks: Public Inflection Benchmarks: Public Inflection Benchmarks. Contribute to InflectionAI/Inflection-Benchmarks development by creating an account on GitHub.
- How Quickly Do Large Language Models Learn Unexpected Skills? | Quanta Magazine: A new study suggests that so-called emergent abilities actually develop gradually and predictably, depending on how you measure them.
- White House urges developers to dump C and C++: Biden administration calls for developers to embrace memory-safe programing languages and move away from those that cause buffer overflows and other memory access vulnerabilities.
- Compute metrics for generation tasks in SFTTrainer · Issue #862 · huggingface/trl: Hi, I want to include a custom generation based compute_metrics e.g., BLEU, to the SFTTrainer. However, I have difficulties because: The input, eval_preds, into compute_metrics contains a .predicti...
- alpindale/Mistral-7B-v0.2-hf · Hugging Face: no description found
- GAIR/lima · Datasets at Hugging Face: no description found
- Accelerating Large Language Models with Mixed-Precision Techniques - Lightning AI: Training and using large language models (LLMs) is expensive due to their large compute requirements and memory footprints. This article will explore how leveraging lower-precision formats can enhance...
- ISTA-DASLab ( IST Austria Distributed Algorithms and Systems Lab): no description found
- unilm/bitnet/The-Era-of-1-bit-LLMs__Training_Tips_Code_FAQ.pdf at master · microsoft/unilm: Large-scale Self-supervised Pre-training Across Tasks, Languages, and Modalities - microsoft/unilm
- aphrodite-engine/tests/benchmarks at main · PygmalionAI/aphrodite-engine: PygmalionAI's large-scale inference engine. Contribute to PygmalionAI/aphrodite-engine development by creating an account on GitHub.
- GitHub - PygmalionAI/aphrodite-engine: PygmalionAI's large-scale inference engine: PygmalionAI's large-scale inference engine. Contribute to PygmalionAI/aphrodite-engine development by creating an account on GitHub.
- Watch movies online and Free tv shows streaming - ev01.net: Fast and Free streaming of over 250000 movies and tv shows in our database. No registration, no payment, 100% Free full hd streaming
- argilla (Argilla): no description found
- GitHub - ggerganov/llama.cpp: LLM inference in C/C++: LLM inference in C/C++. Contribute to ggerganov/llama.cpp development by creating an account on GitHub.
Unsloth AI (Daniel Han) ▷ #random (58 messages🔥🔥):
- Kernel Conundrum on Local Machine: A discussion around an issue requiring kernel restarts on a user's local machine hints it might be memory-related. An error message mentioning 32-bit and needing a restart to clear it was shared, with some discussion of a possible Out Of Memory (OOM) condition, but the user confirms the machine works fine after kernel restarts.
- Warming Up to Fiber Optics and Big Models: One user excitedly reports upgrading to fiber optics and a new 2TB WD Black drive to support larger models. There is enthusiasm for the current performance and potential future hardware upgrades.
- ORPO Generates Buzz in AI Community: There's excitement around ORPO (Odds Ratio Preference Optimization), as users discuss its integration and the performance boost it gives models. A link to the original paper on arXiv was provided.
- Unsloth Keeps Up with TRL: In relation to the ORPO discussion, users confirmed that Unsloth AI should support it if it's supported by TRL (Transformer Reinforcement Learning). Optimizations and patching from Unsloth to TRL were mentioned, along with encouragement to report any issues with the new integrations.
- New Toolkit for Transformer Models: An interesting toolkit for transformers called transformer-heads was linked. It's designed for attaching, training, saving, and loading new heads for transformer models, available on GitHub.
Link mentioned: GitHub - center-for-humans-and-machines/transformer-heads: Toolkit for attaching, training, saving and loading of new heads for transformer models: Toolkit for attaching, training, saving and loading of new heads for transformer models - center-for-humans-and-machines/transformer-heads
Unsloth AI (Daniel Han) ▷ #help (317 messages🔥🔥):
- Understanding Unsloth's Fast Dequantizing: Unsloth AI's `fast_dequantize` in `fast_lora.py` is noted to be optimized for speed, with reduced memory copies compared to `bitsandbytes`.
- Troubleshooting and Updating Mistral with Unsloth: A member was advised to upgrade Unsloth due to issues with Gemma GGUF, with a command provided for the upgrade. It was noted that problems existed not only with GGUF but also with merging.
- Resolving Inference Issue with Looping Tokens: Discussion of a reported issue where models converted to GGUF, particularly using Mistral, started repeating `<s>` in a loop during responses. Unsloth's maintainer suggests checking the `tokenizer.eos_token`.
- Combining Multiple Datasets for Fine-Tuning: It's suggested that multiple datasets can be concatenated into one text string, processed, and then appended together for training. Enhanced instructions and responses from different datasets can potentially be merged for this purpose.
- Needing Clarity on Fine-Tuning Parameters: Queries were made about controlling epochs with `max_steps` during fine-tuning, for which setting `num_train_epochs` instead was recommended. Additionally, it's mentioned that higher memory consumption may result from increasing `max_seq_length` due to padding.
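The dataset-combining suggestion above can be sketched without any libraries: render each (instruction, response) pair into one text string, then concatenate the per-dataset lists (the prompt template below is illustrative, not Unsloth's format):

```python
# Sketch of combining multiple fine-tuning datasets: format each example
# into a single text string, then concatenate the per-dataset lists.
# (The template is illustrative, not a specific Unsloth/Alpaca format.)
TEMPLATE = "### Instruction:\n{instruction}\n\n### Response:\n{response}"

def to_text(dataset):
    """Render each {'instruction', 'response'} example to one training string."""
    return [TEMPLATE.format(**ex) for ex in dataset]

ds_a = [{"instruction": "Add 2+2", "response": "4"}]
ds_b = [{"instruction": "Capital of France?", "response": "Paris"}]

combined = to_text(ds_a) + to_text(ds_b)
print(len(combined))  # one text string per example across both datasets
```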
Links mentioned:
- Mixture of Experts for Clowns (at a Circus): no description found
- HirCoir/Claud-mistral-7b-bnb-4bit-GGUF at main: no description found
- HirCoir/Claud-openbuddy-mistral-7b-v19.1-4k · Hugging Face: no description found
- mistralai/Mistral-7B-Instruct-v0.2 · Hugging Face: no description found
- Google Colaboratory: no description found
- Home: 2-5X faster 70% less memory QLoRA & LoRA finetuning - unslothai/unsloth
- bitsandbytes/bitsandbytes/autograd/_functions.py at main · TimDettmers/bitsandbytes: Accessible large language models via k-bit quantization for PyTorch. - TimDettmers/bitsandbytes
- unilm/bitnet/The-Era-of-1-bit-LLMs__Training_Tips_Code_FAQ.pdf at master · microsoft/unilm: Large-scale Self-supervised Pre-training Across Tasks, Languages, and Modalities - microsoft/unilm
- Fine tuning LLMs for Memorization: ⚡️ ADVANCED-fine-tuning Repo (incl. Memorization Scripts): https://trelis.com/advanced-fine-tuning-scripts/ ⚡️ One-click Fine-tuning & Inference Templates: ht...
- GitHub - unslothai/unsloth: 2-5X faster 70% less memory QLoRA & LoRA finetuning: 2-5X faster 70% less memory QLoRA & LoRA finetuning - unslothai/unsloth
Unsloth AI (Daniel Han) ▷ #showcase (33 messages🔥):
- Hints of Troubles with Unsloth Integration: Users express issues with using Ollama templates, particularly with different quant models such as q4, leading to poor results.
- Diagnostics on GPT4All: One user is running tests on GPT4All to resolve issues and is advised not to escape backticks and to try different quant sizes.
- Q8 Model Version Shows Promise: After a bit of back-and-forth, a user confirms that the Q8 model on Huggingface seems to be functioning correctly.
- Sappha-2b-v3 Makes Waves: A new model, sappha-2b-v3, which is fine-tuned with Unsloth on Gemma-2b, outperforms current models on several benchmarks, prompting discussions on its capability.
- Interest Peaks for New Models: Users show excitement for the newly released models and share links to their best-performing models, such as MasherAI-v6-7B, while seeking information on the fine-tuning process used.
Links mentioned:
- mahiatlinux/MasherAI-v6-7B · Hugging Face: no description found
- Fizzarolli/sappha-2b-v3 · Hugging Face: no description found
- ollama/docs/modelfile.md at main · ollama/ollama: Get up and running with Llama 2, Mistral, Gemma, and other large language models. - ollama/ollama
Unsloth AI (Daniel Han) ▷ #suggestions (29 messages🔥):
- PyTorch Full Finetuning Without Breaking the (Memory) Bank: PyTorch users take note: a new pull request allows full finetuning to fit into less than 16GB of RAM, making it more accessible for those with consumer-grade GPUs.
- LoftQ Hits the Scene: In artificial intelligence optimization news, LoftQ has been included in the Hugging Face PEFT release v0.10.0, which brings enhanced fine-tuning capabilities to larger models.
- Multi-Turn Training Challenge for ORPO: There's a discussion on multi-turn dialogue training for ORPO, with suggestions to resolve the current limitations of the (prompt:"", chosen:"", rejected:"") format by introducing a more efficient method that handles multiple turns effectively.
- ORPO Needs Better Multi-Turn Training: The community expresses concern that the current ORPO method doesn't cater well to multi-turn dialogue training, which is essential for ORPO to be a feasible replacement for SFT, highlighting the importance of a standardized and optimized format for dialogue training.
- Successful ORPO Trials with Mistral: One member boasts impressive results when applying ORPO TRL implementation on the Mistral-7B-v0.2 model, using the argilla/ultrafeedback-binarized-preferences-cleaned dataset, suggesting that further tuning might yield even better outcomes.
Links mentioned:
- Release v0.10.0: Fine-tune larger QLoRA models with DeepSpeed and FSDP, layer replication, enhance DoRA · huggingface/peft: Highlights Support for QLoRA with DeepSpeed ZeRO3 and FSDP We added a couple of changes to allow QLoRA to work with DeepSpeed ZeRO3 and Fully Sharded Data Parallel (FSDP). For instance, this allow...
- Full finetune < 16GB by rohan-varma · Pull Request #527 · pytorch/torchtune: Context We'd like to enable a variant of full finetune that trains in < 16GB of RAM for users with consumer grade GPUs that have limited GPU RAM. This PR enables the full finetune to fit into ...
Perplexity AI ▷ #general (892 messages🔥🔥🔥):
- Pro Users Debate Perplexity's Image Generation: Users discussed the ability to generate images with Perplexity as a Pro feature, noting that it requires using Writing mode and switching off the Pro toggle on the web version.
- Claude Opus & GPT-4 Turbo Differences: Conversations centered on the functionality of the Claude 3 Opus and GPT-4 Turbo models, comparing their abilities for academic research and writing code, and noting that GPT-4 Turbo can compile Python files, which Perplexity does not currently support.
- Exploring Stability AI and Local Models: Stability AI's models like SDXL and local installations were a focus, with users sharing tips and experiences about running these powerful image generation tools on personal hardware, despite the high costs involved.
- Investigating Perplexity Bugs and Confusions: Users expressed confusion about certain Perplexity features, such as repeated unrelated prompts appearing during sessions, how to disable unnecessary search triggers when using certain AI models, and issues encountered on the iOS app.
- Perplexity Features and Updates Discussion: Users debated features like Claude 3 Opus's API capabilities, the Op1 synthesizer's aesthetics, new models like the Rabbit R1, and the possibility of integrating Perplexity with iOS Spotlight search.
Links mentioned:
- Imagine with Meta AI: Use Imagine with Meta AI to quickly create high-resolution, AI-generated images for free. Just describe an image and Meta AI will generate it with technology from Emu, our image foundation model.
- MTEB Leaderboard - a Hugging Face Space by mteb: no description found
- Stability AI Image Models â Stability AI: Experience unparalleled image generation capabilities with SDXL Turbo and Stable Diffusion XL. Our models use shorter prompts and generate descriptive images with enhanced composition and realistic ae...
- Civitai | Share your models: no description found
- Swedish House Mafia One GIF - Swedish House Mafia One Op1 - Discover & Share GIFs: Click to view the GIF
- Iota Crypto GIF - Iota Crypto Cryptocurrency - Discover & Share GIFs: Click to view the GIF
- Claude 2.1 prompt engineering guide: Learn how to prompt Claude with these 11 prompt engineering tips.
- Introducing deep search | Search Quality Insights: no description found
- You can now use Copilot's GPT-4 Turbo model for free: Microsoft has just made the advanced GPT model available for everyone, with no catches or tricks.
- Stability AI Announcement â Stability AI: Earlier today, Emad Mostaque resigned from his role as CEO of Stability AI and from his position on the Board of Directors of the company to pursue decentralized AI. The Board of Directors has appoin...
- Have you ever had a dream like this?: We all have at one point.
- Kys Wojak GIF - Kys Wojak Mushroom - Discover & Share GIFs: Click to view the GIF
- PerplexityBot: no description found
- Reddit - Dive into anything: no description found
- Golden Eggs Willy Wonka And The Chocolate Factory GIF - Golden Eggs Willy Wonka And The Chocolate Factory Clean The Eggs - Discover & Share GIFs: Click to view the GIF
- GitHub - OpenAccess-AI-Collective/axolotl: Go ahead and axolotl questions: Go ahead and axolotl questions. Contribute to OpenAccess-AI-Collective/axolotl development by creating an account on GitHub.
- Orion Browser by Kagi: Orion is a fast, free, web browser for iPhone and iPad with no ads, no telemetry. Check also Orion for your Mac desktop. Orion has been engineered from ground up as a truly privacy-respecting browse...
- DOJ sues Apple over iPhone monopoly in landmark antitrust case: Apple and its iPhone and App Store business have been eyed by the Department of Justice, which previously filed antitrust suits against Google.
Perplexity AI ▷ #sharing (38 messages🔥):
- Legal Battle Ahead: The United States is engaged in a lawsuit, details of which can be explored at United States Sues.
- Email Authentication Scrutiny: Users interested in email security protocols, specifically DMARC, can find information at DMARC Details.
- Decoding Market Tools: TradingView, a tool for traders and investors, is discussed and can be examined at TradingView Insights.
- Generating Curiosity Around Perplexity: The potential of Perplexity AI replacing other tools is being questioned, and insights can be discovered at Should Perplexity Replace?.
- The Concept of Love Explored: An inquiry into the nature of love has been raised, with a desire to understand more at What is Love?.
Perplexity AI ▷ #pplx-api (24 messages🔥):
- Token Limit Troubles: A member encountered a BadRequestError due to a prompt and token generation request exceeding the 16,384 token limit. They were instructed to reduce the `max_tokens` value to stay under the limit and to consider shortening their prompts.
- Resume Analyzer Development: The member shared that they are working on a resume analyzer/builder project as a way to practice with AI, indicating they are new to the field.
- Seeking Token Count Clarity: In response to a question about limiting user prompts by token count, it was explained that the number of tokens is tied to the length of the message sent. They were referred to Perplexity AI's documentation for an accurate token count.
- Tokenization Tool Tip: Another member recommended using OpenAI's tokenizer tool as a general gauge for token count, but noted that different AI models may tokenize differently. For precision, they advised checking token usage directly through the Perplexity API.
- In Search of an Autogpt-like Service: A member inquired about an autogpt-like service that supports Perplexity API keys for automating iterative tasks. There were no responses provided to this query within the summarized message history.
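The 16,384-token ceiling discussed above is easy to guard against client-side before sending a request. A rough sketch (the 4-characters-per-token heuristic and the helper names are assumptions; real tokenizers vary per model, so check actual usage through the API for precision):

```python
CONTEXT_LIMIT = 16384  # limit reported in the discussion above

def estimate_tokens(text: str) -> int:
    # Crude heuristic: roughly 4 characters per token for English text.
    return max(1, len(text) // 4)

def safe_max_tokens(prompt: str, requested: int, limit: int = CONTEXT_LIMIT) -> int:
    # Clamp max_tokens so prompt + completion stay under the context limit.
    budget = limit - estimate_tokens(prompt)
    if budget <= 0:
        raise ValueError("Prompt alone exceeds the context limit; shorten it.")
    return min(requested, budget)
```

If the clamped value comes out far below what the task needs, that is the signal to shorten the prompt rather than raise `max_tokens`.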
Link mentioned: Chat Completions: no description found
LM Studio ▷ #💬-general (533 messages🔥🔥🔥):
- High CPU Usage Queries: Users noticed high CPU usage when running models on LM Studio version 0.2.17; some resolved issues by restarting LM Studio and resetting settings to default. Advice for resolving these issues includes reviewing log files for errors.
- GPU Compatibility Concerns: There were inquiries about the best GPU cards for LM Studio, with discussions revealing preference for Nvidia graphics cards like the 3090 TI for better compatibility and performance. Users also discussed various issues regarding GPU offload and the impact of different model file sizes on performance.
- Local Server Accessibility: Users encountered errors using LM Studio's local server feature, with successful resolutions involving reinstallation and proper configurations; users are prompted to post error reports in a specific Discord channel (#1139405564586229810) for assistance.
- Model Format Support: Discussions indicated that LM Studio supports GGUF model format, and users explored how to convert Hugging Face models to GGUF format using command-line methods and the importance of sharing converted models back on Hugging Face.
- Linux and MacOS Support: Users inquired about using LM Studio on Linux and MacOS platforms. There is no immediate plan for a docker image or an Intel Mac version of LM Studio, but users are encouraged to vote for this feature on the feature request channel (#1128339362015346749).
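When debugging local-server errors like those above, it helps to test the server directly before blaming the client. LM Studio's server speaks the OpenAI-compatible chat-completions format; assuming the default endpoint `http://localhost:1234/v1` (verify in your install), a minimal sketch that builds the request body you would send with `curl` or `requests`:

```python
import json

# Assumed default LM Studio local-server endpoint; adjust to your configuration.
BASE_URL = "http://localhost:1234/v1/chat/completions"

def build_chat_request(prompt: str, max_tokens: int = 256) -> str:
    # OpenAI-style chat-completions payload (payload construction only,
    # no network call); POST this JSON to BASE_URL to smoke-test the server.
    payload = {
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
        "temperature": 0.7,
    }
    return json.dumps(payload)
```

If a hand-built request like this succeeds while the client fails, the problem is in the client configuration, not the server.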
Links mentioned:
- no title found: no description found
- LM Studio: no description found
- roborovski/superprompt-v1 · Hugging Face: no description found
- Get started | 🦜️🔗 Langchain: LCEL makes it easy to build complex chains from basic components, and
- bartowski/c4ai-command-r-v01-GGUF · Hugging Face: no description found
- Ban Keyboard GIF - Ban Keyboard - Discover & Share GIFs: Click to view the GIF
- Making LLMs lighter with AutoGPTQ and transformers: no description found
- GitHub - czkoko/SD-AI-Prompt: A shortcut instruction based on LLama 2 to expand the stable diffusion prompt, Power by llama.cpp.: A shortcut instruction based on LLama 2 to expand the stable diffusion prompt, Power by llama.cpp. - czkoko/SD-AI-Prompt
- Run ANY Open-Source LLM Locally (No-Code LMStudio Tutorial): LMStudio tutorial and walkthrough of their new features: multi-model support (parallel and serialized) and JSON outputs. Join My Newsletter for Regular AI Up...
- The unofficial LMStudio FAQ!: Welcome to the unofficial LMStudio FAQ. Here you will find answers to the most commonly asked questions that we get on the LMStudio Discord. (This FAQ is community managed). LMStudio is a free closed...
- Caravan Palace - Lone Digger (Official MV): Preorder our new album: https://caravanpalace.ffm.to/gmclub Come see us live: http://www.caravanpalace.com/tour Subscribe to our channel and click th...
- GitHub - stitionai/devika: Devika is an Agentic AI Software Engineer that can understand high-level human instructions, break them down into steps, research relevant information, and write code to achieve the given objective. Devika aims to be a competitive open-source alternative to Devin by Cognition AI.: Devika is an Agentic AI Software Engineer that can understand high-level human instructions, break them down into steps, research relevant information, and write code to achieve the given objective...
- GitHub - caddyserver/caddy: Fast and extensible multi-platform HTTP/1-2-3 web server with automatic HTTPS: Fast and extensible multi-platform HTTP/1-2-3 web server with automatic HTTPS - caddyserver/caddy
- Tutorial: How to convert HuggingFace model to GGUF format · ggerganov/llama.cpp · Discussion #2948: Source: https://www.substratus.ai/blog/converting-hf-model-gguf-model/ I published this on our blog but though others here might benefit as well, so sharing the raw blog here on Github too. Hope it...
- Dev:API â YaCyWiki: no description found
- Web Harvesting for Data Mining - YaCy Searchlab: no description found
- Home - YaCy: no description found
- nisten/mistral-instruct0.2-imatrix4bit.gguf · Hugging Face: no description found
- Tweet from meng shao (@shao__meng): They just change the readme of HuggingFace space: Mistral-7B-Instruct-v0.2 based on Mistral-7B-Instruct-v0.1 => Mistral-7B-v0.2 https://twitter.com/shao__meng/status/1771680453210370157 Quotin...
- George Hotz | Exploring | finding exploits in AMD's GPU firmware | Giving up on AMD for the tinybox: Date of the stream 21 Mar 2024.from $1050 buy https://comma.ai/shop/comma-3x & best ADAS system in the world https://openpilot.comma.aiLive-stream chat added...
LM Studio ▷ #🤖-models-discussion-chat (71 messages🔥🔥):
- Q-AGI Skepticism Strikes: A discussion about Q-AGI videos led to expressions of fatigue and refusal to consume more content on the topic, with a humorous meme shared to underline the sentiment.
- AI in Architecture Needs Human Verification: Dialogue regarding the use of AI in architectural engineering highlighted skepticism about trusting AI models without human oversight, citing the need for human certification due to legal and safety concerns.
- Fine-Tuning Models for Specific Tasks: Members discussed the effectiveness of fine-tuning smaller language models for specific tasks. One member shared their creation of a program that lets users train their own models based on GPT-2, complete with an instruction manual generated by Claude.
- Understanding Model Size vs. Quantization: There was clarification provided about the difference between #b (size of the model based on parameters) and q# (level of quantization) when deciding which model version to run, such as "llama 7b - q8" versus "llama 13b - q5".
- Choosing the Right Model for Context Length: A user inquired about models with 32K context length for RAG-adjacent interactions, and it was mentioned that Mistral recently released a 7b 0.2 version with a 32k context.
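The #b vs. q# distinction above maps directly onto memory: weight memory scales as parameters × bits-per-weight ÷ 8. A back-of-the-envelope sketch (KV cache and runtime overhead are ignored; the function name is mine):

```python
def approx_weights_gb(params_billions: float, quant_bits: float) -> float:
    # parameters * bits-per-weight / 8 bytes of weight storage,
    # ignoring KV cache and runtime overhead.
    bytes_total = params_billions * 1e9 * quant_bits / 8
    return round(bytes_total / 1e9, 1)

# Comparing the examples above: "llama 7b - q8" vs "llama 13b - q5"
small_high_precision = approx_weights_gb(7, 8)   # about 7.0 GB of weights
large_low_precision = approx_weights_gb(13, 5)   # about 8.1 GB of weights
```

This is why the choice is a genuine trade-off: the larger model at lower precision can land in a similar memory budget as the smaller model at higher precision.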
Links mentioned:
- The Era of 1-bit LLMs: All Large Language Models are in 1.58 Bits: Recent research, such as BitNet, is paving the way for a new era of 1-bit Large Language Models (LLMs). In this work, we introduce a 1-bit LLM variant, namely BitNet b1.58, in which every single param...
- How to Fine-Tune LLMs in 2024 with Hugging Face: In this blog post you will learn how to fine-tune LLMs using Hugging Face TRL, Transformers and Datasets in 2024. We will fine-tune a LLM on a text to SQL dataset.
- Dont Say That Ever Again Diane Lockhart GIF - Dont Say That Ever Again Diane Lockhart The Good Fight - Discover & Share GIFs: Click to view the GIF
- Models - Hugging Face: no description found
LM Studio ▷ #🧠-feedback (17 messages🔥):
- Request for Download Speed Limiter: A user requested the addition of a download speed limiter to avoid consuming all the bandwidth at home. Other users suggested using system-level settings to throttle speeds, arguing that large downloads are common across many applications.
- Confusion Over Image Uploading: One user struggled to figure out how to upload images to a model. Guidance was provided by others, mentioning a tool called .mmproj converter and directing where to find it and how to use it.
- Llama Image Input Issues on 0.2.17: A user was advised to use version 0.2.16 rather than 0.2.17 of a tool due to issues with image inputs on the newer version. However, a follow-up clarified that version 0.2.16 for Linux was skipped, and version 0.2.14 worked well for llava vision models except for moondream2.
- Problems with the context_window Setting: A user raised an issue with the `context_window` setting when using LM Studio, mentioning it only works with the default setting. No direct solution was provided in the message history.
- Moving Discussions to Relevant Channels: A user was instructed to move a technical discussion to a more appropriate channel specifically dedicated to such topics.
Link mentioned: nisten/obsidian-3b-multimodal-q6-gguf · Hugging Face: no description found
LM Studio ▷ #docs-and-tips (2 messages):
- Launch of New Docs: LM Studio has unveiled its new documentation website which can be accessed at lmstudio.ai/docs.
- Navigating Multi Model Sessions: To understand the new Multi Model Session feature or JSON Mode, users can watch a tutorial video from 5:57 which provides instructions and insights on their usage.
Links mentioned:
- Run ANY Open-Source LLM Locally (No-Code LMStudio Tutorial): LMStudio tutorial and walkthrough of their new features: multi-model support (parallel and serialized) and JSON outputs. Join My Newsletter for Regular AI Up...
- Documentation | LM Studio: Technical Reference
LM Studio ▷ #hardware-discussion (228 messages🔥🔥):
- Distributed AI - A Networked Dream or Practical Scheme?: An extended discussion took place regarding the feasibility of networked machines running language models collaboratively, akin to a mini-distributed computing environment. There was skepticism about the practicability due to latency and bandwidth constraints, but examples like HyperspaceAI's approach and methods such as DHT (distributed hash table) were cited for potential inspiration.
- VRAM Large and in Charge: Price and performance comparisons between high-VRAM GPUs such as the RTX 3090 with its 24GB of VRAM were discussed for running AI models. There was a general consensus that, for the purpose of local machine learning, GPUs like the RTX 3090 offer the best value for VRAM capacity.
- New Horizons in Local Compute: The conversation touched on the prospect of LLMs (large language models) shifting computing demands back to local infrastructure, much like the Linux/LAMP movement of the '90s. The parallel was drawn between the potential growth in LLM development and deployment and past tech revolutions that demanded significant grassroots technical expertise.
- Macs and Memory - Apple's Big VRAM Offer: Speculation around the RAM capacity of future Mac models, specifically the M3 Ultra Studio, was discussed, with expectations it might allow for at least 256GB of (V)RAM. Current models like the M3 and M2 Ultra Mac Studio were highlighted for their substantial combined system and GPU RAM, capable of performing high-VRAM computations.
- Hybrid GPU Setups - The Plot Thickens: An attempted setup with an AMD 7800XT and an NVIDIA 3060 in the same PC resulted in initialization errors with LM Studio, prompting a conversation about the challenges of running a Frankenstein build. The broader implications of how to approach building rigs with multiple high-VRAM GPUs for running large AI models were also touched upon.
Links mentioned:
- EvalPlus Leaderboard: no description found
- Hackers can read private AI-assistant chats even though they're encrypted: All non-Google chat GPTs affected by side channel that leaks responses sent to users.
- MemGPT Explained!: Thank you so much for watching our paper summary video on MemGPT! MemGPT is a super exciting new work bridging together concepts in how Operating Systems man...
- GitHub - intel/intel-npu-acceleration-library: IntelÂź NPU Acceleration Library: IntelÂź NPU Acceleration Library. Contribute to intel/intel-npu-acceleration-library development by creating an account on GitHub.
- GitHub - pytorch-labs/gpt-fast: Simple and efficient pytorch-native transformer text generation in <1000 LOC of python.: Simple and efficient pytorch-native transformer text generation in <1000 LOC of python. - pytorch-labs/gpt-fast
LM Studio ▷ #🧪-beta-releases-chat (16 messages🔥):
- Reboot Solves Mysterious Output Issues: A user reported that after experiencing a decrease in output quality and models outputting gibberish, a restart of LM Studio resolved the issue.
- Older AMD GPUs Unsupported by ROCM Build: A user faced an error while loading models on LM Studio with a RX 570, which was clarified by another user stating that the RX570 is too old to work with the ROCM build.
- Server Output Abruptly Stops: Multiple members discussed an issue where the server stops after outputting only 2 tokens. Suggestions to troubleshoot included sharing logs and trying out a "hello world (curl)" example.
- Lengthy Sessions Lead to Garbled Output: One member experienced garbled output during prolonged sessions with LM Studio, which seems to relate to a multiple of the token count but persists despite the rolling window approach to manage context.
- Guidance Offered for Managing Token Limits: It was noted that halving tokens could cause issues since tokens are not equivalent to words and may result in incomplete phrases affecting the model's response. The suggestion was to replicate the experiment on the API server for better logging and analysis.
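The rolling-window approach mentioned above can be sketched simply: evict the oldest whole messages until the running token estimate fits, rather than cutting a message mid-phrase (which produces the incomplete phrases noted above). Function names and the token counter here are illustrative:

```python
def rolling_window(messages, token_limit, count_tokens):
    # Keep the most recent messages whose combined token estimate fits
    # the limit; evict whole messages from the oldest end, never halves.
    kept = list(messages)
    while kept and sum(count_tokens(m) for m in kept) > token_limit:
        kept.pop(0)
    return kept

# Toy example with a ~4-chars-per-token estimate: the oldest message
# is dropped whole once the total exceeds the budget.
history = ["a" * 8, "b" * 4, "c" * 4]
trimmed = rolling_window(history, token_limit=3, count_tokens=lambda m: len(m) // 4)
```

Evicting whole messages keeps every surviving turn coherent, at the cost of occasionally dropping slightly more context than strictly necessary.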
LM Studio ▷ #autogen (48 messages🔥):
- Trouble with Large Language Models: One member mentioned a limitation in running more than one LLM due to an 8GB VRAM cap, suggesting possible configurations to mitigate this limit.
- Token Troubles with Autogen: There were multiple reports of issues with Autogen, including unexpected 4-token limits outside local environments, which led to confusion among users attempting remote server setups.
- Workflow Woes: Users experienced frustration with Autogen workflows, with some resorting to manual edits of workflow files to set the `max_tokens` parameter to -1, as suggested in the thread.
- Autogen Studio UX Struggles: Members discussed the user experience of Autogen Studio, pointing out the less-than-intuitive UI and the need for improved error messaging and model-loading indicators.
- Community Collaboration on Autogen: The discussion showed members actively helping each other troubleshoot issues with Autogen Studio, emphasizing community-driven problem solving and knowledge sharing.
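The manual workflow-file edit described above amounts to changing one field. A hypothetical fragment of what that edit might look like (field names and layout are illustrative, not the exact Autogen Studio schema; check your own exported workflow file):

```json
{
  "llm_config": {
    "config_list": [
      { "model": "local-model", "base_url": "http://localhost:1234/v1" }
    ],
    "max_tokens": -1
  }
}
```

Setting `max_tokens` to -1 is the thread's workaround for the unexpected 4-token cap; a negative value is conventionally treated as "no explicit limit".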
LM Studio ▷ #langchain (1 message):
pradeep1148: https://www.youtube.com/watch?v=Nc5Yk0XXgP8
LM Studio ▷ #memgpt (4 messages):
- Windows Compatibility Confusion for MemGPT: A participant expressed difficulty in running MemGPT on Windows, finding the process complicated.
- Check the User Guide: In response, another member suggested reviewing the user guide for assistance.
- Potential Linux Exclusive: The same member then speculated that the issue might be because MemGPT is a Linux-only application.
- WSL as a Solution: A practical solution offered was to use Windows Subsystem for Linux (WSL) to overcome the Windows compatibility issue.
LM Studio ▷ #avx-beta (3 messages):
- AVX Beta Version Update Uncertain: A member inquired about the possibility of an update to the 0.2.10 avx beta version. Another stated that supporting older hardware is not a high priority at the moment and updates may eventually come once more pressing issues are addressed.
LM Studio ▷ #amd-rocm-tech-preview (26 messages🔥):
- ZLUDA vs ROCm Compatibility Issues: A user reported 100% CPU utilization after installing ZLUDA alongside ROCm, suggesting interference or prioritization issues between the two. The discussion pointed toward a potential conflict with having ZLUDA in the path instead of ROCm, affecting performance.
- ROCm Loading Woes: A user encountered a persistent "Exit code: 42" error when attempting to load models over 10GB with ROCm offloading enabled, even with ample VRAM on an RX 6950 XT. The models would load without GPU offloading, albeit slowly, indicating potential issues with the offloading process.
- Path to Resolution?: It was proposed that users check their User PATH environment variable for entries ending in "ROCm/bin" to ensure proper connection to the ROCm libraries. One user reported an absence of ROCm-related paths in their environment variables, possibly contributing to the issues.
- Environment Variable Guidance Shared: To help with ROCm loading errors, a user suggested adding a specific path to the User PATH variable: `C:\Users\[username]\AppData\Local\LM-Studio\app-0.2.17\resources\app\.webpack\main\build\Release\ROCm\bin`. This path is intended to assist in connecting to the ROCm libraries for those experiencing the loading error.
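The PATH check suggested above can be automated. A small sketch (the function name is mine) that scans a Windows-style PATH value for an entry whose trailing component is `ROCm\bin`:

```python
def has_rocm_bin(path_value: str) -> bool:
    # Look for any PATH entry ending in ROCm\bin (forward or back slashes,
    # case-insensitive, ignoring a trailing slash).
    for entry in path_value.split(";"):
        normalized = entry.strip().lower().replace("/", "\\").rstrip("\\")
        if normalized.endswith("rocm\\bin"):
            return True
    return False

# In practice you would call it on the live environment:
# import os; has_rocm_bin(os.environ.get("PATH", ""))
```

An absent entry does not guarantee this is the cause of the "Exit code: 42" error, but it is the cheapest thing to rule out first.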
Link mentioned: How to see names and values of environment variables in Windows 10: In this article, we will see how to view environment variables defined in Windows 10 and their values for the current user and the system variables.
LM Studio ▷ #open-interpreter (23 messages🔥):
- Troubleshooting Interpreter Connection Issues: A user faced an error with a local Mistral model stating "Model with key 'gpt-4' not loaded." After some back-and-forth discussion, making a simple `curl` request to the server fixed the issue, but the cause remained unclear.
- Local LLM Recommendations for Open-interpreter: Users discussed various LLM options for use with Open-interpreter: CodeLlama 7B Instruct - GGUF gave correct answers for sample questions, while Mistral 7B Instruct v0.2 - GGUF underperformed. Another recommendation was Truthful_DPO_TomGrc_FusionNet_7Bx2_MoE_13B-GGUF, which was noted for verbosity.
- Unboxing Open Interpreter's Hype: Users shared views on a YouTube promotional video for Open Interpreter, with one user unable to order due to location constraints. Another shared that printing the device oneself is possible, with STL files available for free on 01.openinterpreter.com/bodies/01-light.
- Exploring GGUF-Compatible Models: Users provided links to various models on Hugging Face, including CodeLlama 7B Instruct and Mistral 7B Instruct, formatted in GGUF, discussing their performance when tested with Open-interpreter.
- Handling Non-Blessed Model Errors: A user resolved errors encountered with non-blessed models by modifying the default system message, linking to issue #1124 on open-interpreterâs GitHub.
Links mentioned:
- 01 Light - 01: no description found
- saltlux (saltlux): no description found
- Nan-Do/Truthful_DPO_TomGrc_FusionNet_7Bx2_MoE_13B-GGUF · Hugging Face: no description found
- TheBloke/CodeLlama-7B-Instruct-GGUF · Hugging Face: no description found
- Open Source AI Agents STUN the Industry | Open Interpreter AI Agent + Device (01 Light ) is out!: đ© My 5 Minute Daily AI Brief đ©https://natural20.beehiiv.com/subscribeđ„ Follow Me On Twitter (X) đ„https://twitter.com/WesRothMoneyLINKS:https://www.openin...
- bug: `markdown` disabled or not supported. · Issue #1124 · OpenInterpreter/open-interpreter: Describe the bug When prompting a local model, https://huggingface.co/TheBloke/CodeLlama-7B-Instruct-GGUF, using LM Studio, I kept getting what should have been valid python output, but the code bl...
- TheBloke/Mistral-7B-Instruct-v0.2-GGUF · Hugging Face: no description found
OpenInterpreter ▷ #general (362 messages🔥🔥):
- Open Interpreter Hits Ubuntu: Discussions indicate users are managing to get Open Interpreter running on Ubuntu 22.04, with some tweaking around microphone support and client understanding. They've expressed a need to better understand the client-server operations, seeking insight from the community.
- Hardware Hacks and Hopes: The community is actively seeking information on building the O1 light and is interested in hacking together hardware for the project. A new dedicated Discord channel called <#1221879849535279204> was created due to high demand.
- Excitement and Queries About 01 Device: Users are excited about trying and potentially building around the O1 device, asking questions about its capabilities, the need for a subscription, Windows compatibility, eSIM possibilities for 4G connectivity, and whether there is a UI for ease of use.
- Tech Support Queries: Members are troubleshooting various technical issues related to Open Interpreter, from excessive AI chattering with the `--os` option to utilizing different LLMs and integrating with various APIs like Groq. There's a focus on enhancing the installer's user-friendliness.
- Contribution and Community Growth: There's eagerness within the community to contribute to Open Interpreter, with users discussing front-end app development, potential integration with Groq, the performance of different LLMs, and a desktop app for Apple silicon devices. The community supports each other's ideas and open-source efforts.
Links mentioned:
- no title found: no description found
- GOODY-2 | The world's most responsible AI model: Introducing a new AI model with next-gen ethical alignment. Chat now.
- OpenAI - Open Interpreter: no description found
- Tweet from killian (@hellokillian): ..jesus open interpreter's first vision model, piloting my 8gb M1 macbook. 100% offline. this will be inside every computer in the world.
- All Settings - Open Interpreter: no description found
- Tweet from Open Interpreter (@OpenInterpreter): https://twitter.com/i/spaces/1dRJZEPewmgGB
- Providers | liteLLM: Learn how to deploy + call models from different providers on LiteLLM
- Groq | liteLLM: https://groq.com/
- open-interpreter/interpreter/core/llm/llm.py at 3e95571dfcda5c78115c462d977d291567984b30 · OpenInterpreter/open-interpreter: A natural language interface for computers. Contribute to OpenInterpreter/open-interpreter development by creating an account on GitHub.
- open-interpreter/interpreter/terminal_interface/utils/count_tokens.py at main · OpenInterpreter/open-interpreter: A natural language interface for computers. Contribute to OpenInterpreter/open-interpreter development by creating an account on GitHub.
- Redirect Notice: no description found
- no title found: no description found
- Tweet from undefined: no description found
- GroqChat: no description found
- Animate Anyone: Animate Anyone: Consistent and Controllable Image-to-Video Synthesis for Character Animation
- GitHub - OpenInterpreter/open-interpreter: A natural language interface for computers: A natural language interface for computers. Contribute to OpenInterpreter/open-interpreter development by creating an account on GitHub.
- Introducing Light 01: World's First Personal AI Assistant by Open Interpreter (Full Setup): In this video, we'll look at the OpenInterpreter Light 01 GitHub repository, a cutting-edge project that's revolutionizing how we interact with computers usi...
- GitHub - Tas667/scalpel: python script that helps you quickly understand the structure and contents of an unknown project.: python script that helps you quickly understand the structure and contents of an unknown project. - Tas667/scalpel
- How to use Open Interpreter cheaper! (LM studio / groq / gpt3.5): Part 1 and intro: https://www.youtube.com/watch?v=5Lf8bCKa_dE 0:00 - set up, 1:09 - default gpt-4, 2:36 - fast mode / gpt-3.5, 2:55 - local mode, 3:39 - LM Studio, 5:5...
- GitHub - cs50victor/os1: AGI operating system ( UI for openinterpreter's 01 ): AGI operating system ( UI for openinterpreter's 01 ) - cs50victor/os1
- Tweet from Ty (@FieroTy): local LLMs with the 01 Light? easy
- Nosta: no description found
- Surprised Pikachu GIF - Surprised Pikachu Pokemon - Discover & Share GIFs: Click to view the GIF
- Anthropic | liteLLM: LiteLLM supports
- open-interpreter/docs/CONTRIBUTING.md at main · OpenInterpreter/open-interpreter: A natural language interface for computers. Contribute to OpenInterpreter/open-interpreter development by creating an account on GitHub.
- no title found: no description found
OpenInterpreter ▷ #O1 (576 messages🔥🔥🔥):
- Building the 01 Yourself: Members are discussing the process of 3D printing and assembling their own 01 Light devices, with plenty of enthusiasm for DIY. The community is sharing insights on materials, settings, and possible design tweaks for printing, with some planning to iterate on their self-made devices.
- Availability of 01 Outside the US: Pre-orders are currently limited to the US only, with no estimated time for international release provided. However, community members from outside the US are encouraged to build their own devices and collaborate with others.
- Understanding and Running 01 Server on Machines: Conversations include questions about running 01 server on various machines with different specs, including low-spec and cloud options. It suggests that as long as the machine can handle the model, it could be feasible, with the actual LLM being the most resource-intensive part.
- Developing 01 Capabilities: There's excitement around extending the functionalities of 01 Light, like integrating LEDs, speakers, or SIM card capabilities for connectivity. Ideas are being exchanged on how to create a more versatile and DIY-friendly design.
- General Questions and Excitement: New members are voicing their anticipation for the device, asking questions about estimated delivery times, subscription requirements, and compatibility with Windows or Mac for automation tools.
Links mentioned:
- V-One - A Desktop PCB Printer | Voltera: V-One is a 4-in-1 desktop PCB printer. Prototype and assemble PCBs in an hour and get immediate feedback on new designs.
- Tweet from undefined: no description found
- M5Stack Launches the M5Dial, a Swish IoT Rotary Encoder with a Built-In Color Touchscreen Display: With support for the Arduino IDE, ESP-IDF, and UIFlow, this multifunctional gadget aims to power your Internet of Things control projects.
- Shut Up And Take My Money Futurama GIF - Shut Up And Take My Money Futurama Fry - Discover & Share GIFs: Click to view the GIF
- Open Interpreter Blog: Official changelog for the open-source Open Interpreter project.
- Mike Bird: A.I. engineering
- ATOM Echo Smart Speaker Development Kit: ATOM ECHO is a programmable smart speaker.This eps32 AIoT Development Kit has a microphone and speaker for AI voice interaction light and small. It can be access AWS, Baidu, ESPHome and Home Assistant...
- M5Stack UnitV2 - The standalone AI Camera for Edge Computing (SSD202D) TinyML: UnitV2 is a high-efficiency AI recognition module, using Sigmstar SSD202D chip, having 128MB-DDR3 memory, 512MB Flash, and 1080P camera. UnitV2 is simple to use and efficient,supporting AI recognition...
- Tweet from Open Interpreter (@OpenInterpreter): https://twitter.com/i/spaces/1dRJZEPewmgGB
- GitHub - rhasspy/piper: A fast, local neural text to speech system: A fast, local neural text to speech system. Contribute to rhasspy/piper development by creating an account on GitHub.
- Contact Us: Turn data collection into an experience with Typeform. Create beautiful online forms, surveys, quizzes, and so much more. Try it for FREE.
- 01/hardware/light/BOM.md at main · OpenInterpreter/01: The open-source language model computer. Contribute to OpenInterpreter/01 development by creating an account on GitHub.
- ChatGPT "Code Interpreter" But 100% Open-Source (Open Interpreter Tutorial): This is my second video about Open Interpreter, with many new features and much more stability, the new Open Interpreter is amazing. Update: Mixtral 7x8b was...
- Issues · OpenInterpreter/01: The open-source language model computer. Contribute to OpenInterpreter/01 development by creating an account on GitHub.
- 01/docs/bodies at main · OpenInterpreter/01: The open-source language model computer. Contribute to OpenInterpreter/01 development by creating an account on GitHub.
- Discover Typeform, where forms = fun: Create a beautiful, interactive form in minutes with no code. Get started for free.
- GitHub - openai/whisper: Robust Speech Recognition via Large-Scale Weak Supervision: Robust Speech Recognition via Large-Scale Weak Supervision - openai/whisper
- GitHub - adafruit/Adafruit-PAM8302-Mono-Amplifier-PCB: PCB files for the Adafruit PAM8302 Mono Amplifier: PCB files for the Adafruit PAM8302 Mono Amplifier. Contribute to adafruit/Adafruit-PAM8302-Mono-Amplifier-PCB development by creating an account on GitHub.
- Miniature ChatGPT Voice Assistant based on XIAO ESP32S3 Sense | Seeed Studio Wiki: This tutorial explains how to use the XIAO ESP32S3, record a voice, recognise the voice and then ask ChatGPT a question and get an answer to display on the screen.
- Run ANY Open-Source LLM Locally (No-Code LMStudio Tutorial): LMStudio tutorial and walkthrough of their new features: multi-model support (parallel and serialized) and JSON outputs. Join My Newsletter for Regular AI Up...
- PCB prototyping, PCB making at home - WEGSTR: Experience the fascinating world of PCB manufacturing with this step-by-step video guide. Learn the art of making high-quality PCBs using a CNC milling machi...
- Tweet from GitHub - FixTweet/FxTwitter: Fix broken Twitter/X embeds! Use multiple images, videos, polls, translations and more on Discord, Telegram and others: Fix broken Twitter/X embeds! Use multiple images, videos, polls, translations and more on Discord, Telegram and others - FixTweet/FxTwitter
- Speaker - 40mm Diameter - 4 Ohm 3 Watt: Hear the good news! This speaker is a great addition to any audio project where you need a 4 Ohm impedance and 3W or less of power. At 40mm diameter it has a more square-ish shape, ....
- DotStar Micro LEDs (APA102â2020) - Smart SMD RGB LED - 10 pack: These incredibly small surface-mount LEDs are an easy way to add a lot of very tiny (but bright!) colorful dots to your project. They're mini versions of the ones in our digital ...
- Concept Bytes on Instagram: "A useful Ai named Jarvis. What features do you wanna see next? #ironman #ai #tech #xtool #xtoolf1 @xtool.official": 6,784 likes, 154 comments - concept_bytes on March 22, 2024: "A useful Ai named Jarvis. What features do you wanna see next? #ironman #ai #tech #xtool #xtoolf1 @xtool.official"
- 01/software/source/server/i.py at main · OpenInterpreter/01: The open-source language model computer. Contribute to OpenInterpreter/01 development by creating an account on GitHub.
- 01/ROADMAP.md at main · OpenInterpreter/01: The open-source language model computer. Contribute to OpenInterpreter/01 development by creating an account on GitHub.
- Wake word demonstration on Raspberry Pi and custom ESP32 board in Home Assistant | Year of the Voice: If there's one thing people have been asking for during the Year of the Voice, it's wake word. The ability to say something like "OK Google," "Hey Siri," or ...
- Shocking Open Interpreter AI Agent + Device (01 Light ) is Finally Revealed: Shocking Open Interpreter AI Agent + Device (01 Light ) is Finally Revealed #OpenInterpreter #aiagent CHANNEL LINKS: Join my Patreon: https://www.patreo...
- GitHub - Tas667/scalpel: python script that helps you quickly understand the structure and contents of an unknown project.: python script that helps you quickly understand the structure and contents of an unknown project. - Tas667/scalpel
- GitHub - SYSTRAN/faster-whisper: Faster Whisper transcription with CTranslate2: Faster Whisper transcription with CTranslate2. Contribute to SYSTRAN/faster-whisper development by creating an account on GitHub.
- GitHub - m-bain/whisperX: WhisperX: Automatic Speech Recognition with Word-level Timestamps (& Diarization): WhisperX: Automatic Speech Recognition with Word-level Timestamps (& Diarization) - m-bain/whisperX
- Building Generally Capable AI Agents with MineDojo | NVIDIA Technical Blog: NVIDIA is helping push the limits of training AI generalist agents with a new open-sourced framework called MineDojo.
OpenInterpreter ▷ #ai-content (11 messages🔥):
- World's First Fully Open-Source AI Assistant: A YouTube video titled "Open Interpreter's 01 Lite - WORLD'S FIRST Fully Open-Source Personal AI AGENT Device" which reviews and shows the installation of the 01 Lite, a 100% open-source personal AI assistant, was highlighted in the chat. Here is the video link.
- Live AI Software and Party Recap: A member shared a link to a YouTube live video of their first attempt at running 01 software along with recordings from the launch party that took place on Discord. The stream can be viewed here.
- The Necessity of High-Quality LLMs: vincentjedi discussed the essential role of large language models (LLMs) in the future of Open Interpreter (OI), stating that progress is "100 percent" reliant on the ability of LLMs to translate prompts into bug-free commands.
- Pioneering the "Rabbit Strategy": The idea of using a "rabbit strategy" to train large action models through user interactions fed back into a cloud service was mentioned by vincentjedi as a necessary approach for OI.
- Optimistic View on UI/UX Challenges: While vincentjedi noted the significant challenge of achieving a bug-free user experience across various apps and interfaces, techfren pointed out the rapid trial-and-error approach that can be applied safely and more efficiently in UI testing.
- Edited 01 Software Stream for Convenience: An edited version of the live stream featuring segments focusing on 01 software was posted by techfren for easier viewing, presenting an insightful resource for those interested in OI's offerings. Watch the edited stream here.
Links mentioned:
- YouTube: no description found
- Open Interpreter's 01 Lite - WORLD'S FIRST Fully Open-Source Personal AI AGENT Device: 01 Lite by Open Interpreter is a 100% open-source personal AI assistant that can control your computer. Let's review it and I'll show you how to install open...
LAION ▷ #general (574 messages🔥🔥🔥):
-
European Data Laws Hinder LAION's Potential: A discussion highlighted that datasets from LAION may be less effective than their US counterparts due to stringent EU regulations. It was suggested that until the EU relaxes its data laws, reliance on synthetic data and collaborations with less restrictive jurisdictions will be necessary, a process humorously termed "data laundering."
-
Stability AI CEO Steps Down Amidst Chaos: Stability AI's founder, Emad Mostaque, announced his resignation as CEO, confirmed by a press release from the company. Interim co-CEOs Shan Shan Wong and Christian Laforte will lead the search for a new permanent CEO, while speculation arises about the potential consequences for the company's direction and open-source commitments.
-
SD3 Model Expectations Set: Previews of the SD3 model suggest it performs comparably to DALL-E 3 in certain contexts but overall struggles with understanding complex concepts and interactions within prompts. Despite a more realistic image generation capability, the SD3 model reportedly often assembles images in a collage-like fashion, without a clear blend of concepts.
-
AI Drama and Ethics in the Spotlight: A message pointed to a conversation on Twitter where concerns were raised about the motivations behind prominent figures in the AI industry. This sparked a discussion about the ethical responsibilities of AI developers and researchers and the infatuation with AI "celebrity" culture on social media platforms.
-
Performance Challenges with AMD for AI: Users shared their frustrations with AMD GPUs and ROCm support for ML workloads, comparing unfavorably to NVIDIA's solutions. Anecdotal evidence suggested AMD's lack of investment in consumer-level ML support could be a missed opportunity in the rise of generative AI models like Stable Diffusion.
Links mentioned:
- Stability AI Founder Emad Mostaque Plans To Resign As CEO, Sources Say: Mostaque has told a number of people close to him that he plans to step down as chief executive of the once buzzy generative AI startup known for Stable Diffusion.
- Tweet from Christian Laforte (@chrlaf): @USEnglish215753 @StabilityAI @EMostaque Yes, the plan hasn't changed, we are still hard at work improving the model towards open release.
- WebLLM | Home: no description found
- It's Not Safe to Click Links on X: When someone posts a link on X, the site generates a link preview. But reportedly, this system can be tricked, and bad actors can redirect you to malicious sites from a falsely advertised link preview...
- Stability AI Announcement â Stability AI: Earlier today, Emad Mostaque resigned from his role as CEO of Stability AI and from his position on the Board of Directors of the company to pursue decentralized AI. The Board of Directors has appoin...
- Money Crying GIF - Money Crying Woody Harrelson - Discover & Share GIFs: Click to view the GIF
- Im Doing My Part Serious GIF - Im Doing My Part Serious Stare - Discover & Share GIFs: Click to view the GIF
- reddit.com: over 18?: no description found
LAION ▷ #research (92 messages🔥🔥):
-
Andrew Ng Predicts AI Workflow Revolution: Andrew Ng, a co-founder of Google Brain, predicts that AI agentic workflows will drive significant progress in AI this year, potentially outpacing even next-generation foundation models. He emphasizes the importance of iterating over documents multiple times, such as outlining, drafting, and revising, contrasting it with the current zero-shot LLM approach (Highlight from Reddit).
-
MIT CSAIL Speeds Up Image Generation: MIT researchers have made a breakthrough by creating a framework that accelerates the image-generating process of tools like Stable Diffusion and DALL-E by 30 times, simplifying it to a single step without losing image quality, through a teacher-student model (MIT News Article).
-
NVIDIA Explores Training of Diffusion Models: NVIDIA's blog post discusses the challenges in improving diffusion models and how they tackle issues shared by many neural networks during training. They point to their EDM2 code and model release and suggest ownership issues related to style normalization may require addressing through modifications like those in EDM2 (NVIDIA Developer Blog Post).
-
Debating Unet's Relevance in an Era of Linear Networks: Conversations cast doubt on the value of enhancements to Unet architectures given the shift toward linear network models for image generation tasks. Some argue that linear models do not require traditional normalization methods, while others express skepticism, suggesting that concepts like layer norm remain integral to neural network functionality.
-
Strategic Pruning and the Mystery of Middle Blocks: Discussion on the resilience of large language models (LLMs) leads to the insight that removing blocks from the middle of the network causes minimal degradation to performance. This leads to speculation about the potential redundancy of certain network segments, especially the "unet middle block," and the need for further study into the architectural peculiarities of linear networks.
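The iterative agentic workflow Ng describes (outline, draft, critique, revise) can be sketched in a few lines of control flow. `call_llm` below is a hypothetical stand-in for any chat-completion API, stubbed so the loop runs offline; a real agent would route these prompts to a hosted model:

```python
# Illustrative sketch of an outline -> draft -> critique -> revise loop.
# `call_llm` is a placeholder, not a real API client.

def call_llm(prompt: str) -> str:
    # Stub: a real implementation would call a chat-completion endpoint.
    return f"[model output for: {prompt[:40]}...]"

def iterative_write(task: str, rounds: int = 2) -> str:
    draft = call_llm(f"Write a first draft for: {task}")
    for _ in range(rounds):
        critique = call_llm(f"List concrete weaknesses in this draft:\n{draft}")
        draft = call_llm(
            f"Revise the draft to address the critique.\n"
            f"Draft:\n{draft}\nCritique:\n{critique}"
        )
    return draft

result = iterative_write("an explanation of agentic workflows")
```

The point of the sketch is that the same base model is queried several times with role-specific prompts, which is where the HumanEval gains in the intro are claimed to come from.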
Links mentioned:
- AI generates high-quality images 30 times faster in a single step : A new distribution matching distillation (DMD) technique merges GAN principles with diffusion models, achieving 30x faster high-quality image generation in a single computational step and enhancing to...
- Explode Cute GIF - Explode Cute Cat - Discover & Share GIFs: Click to view the GIF
- TRC Report 4: no description found
- Reddit - Dive into anything: no description found
- Rethinking How to Train Diffusion Models | NVIDIA Technical Blog: After exploring the fundamentals of diffusion model sampling, parameterization, and training as explained in Generative AI Research Spotlight: Demystifying Diffusion-Based Models…
Nous Research AI ▷ #off-topic (26 messages🔥):
- Shipping Timeline Inquiry: A member inquired about shipping timelines for an unspecified product, suggesting a desire to order one for experimenting without affecting customer availability. Another member indicated that there were no firm dates but shipping is expected in the summer, with an alternative option to build it oneself.
- Raptor Implementation Exploration: A member described implementing a version of Raptor, generating summaries over clusters and using sentence-level word-to-vector embeddings without pretraining. The member noted 5-minute summary generation for a 3b model on transcript documents, stating that the technique may produce many generations but could be roughly equivalent to prompting with chunk summarization.
- Claude Appreciation Expressed: A brief message from a member expressed enthusiasm for Claude, while another member anticipates its impact on open-source development and the creation of new projects due to the quality of its context.
- Sharing FastAPI Resources: A member shared a link to FastAPI, touting its ease of use and readiness for production. They also linked to its documentation and source code, inquiring about open-source projects using this backend framework.
- Suno.AI Creative Fun: Links to Suno.AI were shared by a member, indicating it's an enjoyable platform, with another confirming its ability to create Spotify playlists. Members seemed to express delight in the platform's output.
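The RAPTOR-style recipe mentioned above (embed chunks, group similar ones, summarize each group into a higher tree layer) can be sketched as follows. The embedding and "summarizer" here are toy placeholders; a real pipeline would use a sentence encoder and an LLM, so this only illustrates the tree-building step:

```python
# Minimal RAPTOR-style sketch: cluster text chunks, then emit one summary
# node per cluster. All helpers are hypothetical stand-ins.
from collections import Counter
import math

def embed(text: str) -> Counter:
    # Toy bag-of-words "embedding"; a real system uses a sentence encoder.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def cluster(chunks, threshold=0.3):
    clusters = []  # each cluster is a list of chunks
    for chunk in chunks:
        vec = embed(chunk)
        best = max(clusters, key=lambda c: cosine(vec, embed(" ".join(c))), default=None)
        if best is not None and cosine(vec, embed(" ".join(best))) >= threshold:
            best.append(chunk)
        else:
            clusters.append([chunk])
    return clusters

def summarize(cluster_chunks):
    # Placeholder: in RAPTOR this would be an LLM-generated abstract.
    return " / ".join(c.split(".")[0] for c in cluster_chunks)

chunks = [
    "The cat sat on the mat. It purred.",
    "The cat chased the mat around. Cats like mats.",
    "Stock prices rose sharply today. Markets rallied.",
]
tree_layer = [summarize(c) for c in cluster(chunks)]
```

Repeating the cluster-and-summarize step on `tree_layer` itself is what builds the recursive tree the member described.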
Links mentioned:
- FastAPI: FastAPI framework, high performance, easy to learn, fast to code, ready for production
- Mr. Beast Meets Mistral: AI Created a Cookbook Based on His Wildest Stunts!: Today we create Beast Cookbook. The "Beast Cookbook" idea is a fun and creative way to engage with Mr. Beast's content and generate an entertaining, fictional ...
Nous Research AI ▷ #interesting-links (13 messages🔥):
-
Kubernetes for Nous Models: Members discuss deploying Nous models using Kubernetes, referencing a tweet by Sidero Labs. Techniques include running Ollama with GGUF model format in Docker containers and then orchestrating with Kubernetes pods.
-
OpenRouter.ai Discovery: A member brought attention to the OpenRouter.ai service, which some have used to access Opus. It's associated with a Discord staff member.
-
Persuasive Power of Language Models Analyzed: A pre-registered study on arXiv is mentioned, focusing on the persuasive capabilities of large language models in debates against humans.
-
Showcase of AI-Driven Art Platforms: Shared links include ArtHeart.ai, a platform where users can entertain, create, and earn with AI characters, and novelcrafter, among others.
-
Assessing BitNet's Quantized-Aware Training: A Hugging Face blog post delves into experiments with BitNet 1.5, discussing potential speedups during inference and limitations during training due to the need for smooth optimizer gradients.
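For readers unfamiliar with the quantization scheme being tested, a rough sketch of a BitNet b1.58-style forward quantizer looks like this: weights are scaled by their mean absolute value and rounded to the ternary set {-1, 0, 1}. This shows only the quantizer; quantization-aware training would additionally need a straight-through estimator so full-precision gradients can flow (the smoothness issue the post discusses):

```python
# Hedged sketch of absmean ternary quantization (BitNet b1.58 style).
# Forward quantizer only; not the full training recipe.

def absmean_quantize(weights):
    scale = sum(abs(w) for w in weights) / len(weights) or 1.0
    ternary = [max(-1, min(1, round(w / scale))) for w in weights]
    return ternary, scale

w = [0.9, -0.05, -1.2, 0.4]
q, s = absmean_quantize(w)
# dequantized approximation of w: [t * s for t in q]
```

The appeal is that a ternary weight matrix turns most multiplications into additions and sign flips, which is where the claimed inference speedups come from.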
Links mentioned:
- Tweet from Wing Lian (caseus) (@winglian): It's unclear whether we'll be able to achieve the improvements seen in the paper when integrating DenseFormers with a pretrained Mistral-7B as they found the best performance was seen training...
- Experiments with Bitnet 1.5 (ngmi): no description found
- On the Conversational Persuasiveness of Large Language Models: A Randomized Controlled Trial: The development and popularization of large language models (LLMs) have raised concerns that they will be used to create tailor-made, convincing arguments to push false or misleading narratives online...
- OpenRouter: A router for LLMs and other AI models
Nous Research AI ▷ #announcements (1 messages):
proprietary: @everyone https://twitter.com/NousResearch/status/1771735632035127594
Nous Research AI ▷ #general (469 messages🔥🔥🔥):
-
World Simulator Wows Crowd: The community is abuzz with the impact of the World Simulator project, finding the immersive prompting and AI-generated universe especially cool and engaging. Users shared experiences like creating a prequel to "The Three-Body Problem" and generating unique, if sometimes baffling, civilizational evolutions.
-
Wait for Winged AI Therapist: A member discussed their work-in-progress AI therapist called Thestral, mulling over fine-tuning NousResearch's Hermes on the LLaMA 70B for a therapy-focused outcome. They shared their intention to implement it using a dataset designed for therapeutic conversation fine-tuning.
-
Sim's Opus Opulence: Users discussed the underlying model used in World Simulator, with Opus from Claude 3 being cited for its capability to creatively simulate universes, despite its refusals and "ethical" constraints. There's a shared sentiment that, despite limitations and costs, Opus provides a more satisfying user experience than alternative models.
-
Curious Case of Model Refusals: A detailed discussion unveiled concerns over refusal prompts embedded in the Hermes 2 Pro function-calling model which could interfere with customized AI functionalities. Members deliberated on the dichotomy of effective refusal prompts versus the potential for models to adapt to newly incorporated functions.
-
Decoding the Overton Effect in LLMs: A member elucidated the so-called "Overton Effect" in LLMs, which leads AI models like Claude to steer conversations towards more commonly accepted norms, potentially stifling creativity and novelty in the generative process. This insight sparked conversations about manipulating model prompting to bypass standard LLM limitations.
Links mentioned:
- mixedbread-ai/mxbai-rerank-large-v1 · Hugging Face: no description found
- NousResearch/Nous-Hermes-2-Mistral-7B-DPO · Hugging Face: no description found
- Nous (Ù ÙŰłÙ ŰčŰšŰŻÙ ÙÙ۳ۧÙÙ ): no description found
- Transformers learn patterns, math is patterns: no description found
- Tweet from Marvin von Hagen (@marvinvonhagen): Mistral just announced at @SHACK15sf that they will release a new model today: Mistral 7B v0.2 Base Model - 32k instead of 8k context window - Rope Theta = 1e6 - No sliding window
- Spongebob Why Why Why Why Why Why Why GIF - Spongebob Why Why Why Why Why Why Why Why - Discover & Share GIFs: Click to view the GIF
- Hd_nxx GIF - HD_NXX - Discover & Share GIFs: Click to view the GIF
- Gandalf | Lakera – Test your prompting skills to make Gandalf reveal secret information.: Trick Gandalf into revealing information and experience the limitations of large language models firsthand.
- Sifas Ruby Kurosawa GIF - Sifas Ruby Kurosawa Love Live - Discover & Share GIFs: Click to view the GIF
- RESMPDEV/Mistral-7B-v0.2 at main: no description found
- Tweet from Jeffrey Morgan (@jmorgan): Run Mistral's new base text completion model updated to v0.2 with Ollama: ollama run mistral:text https://ollama.com/library/mistral:text
- Tweet from Alex Reibman đïž (@AlexReibman): Mistral casually dropping a new model at the @cerebral_valley hackathon
- jinaai/jina-embeddings-v2-base-en · Hugging Face: no description found
- Open Interpreter's 01 Lite - WORLD'S FIRST Fully Open-Source Personal AI AGENT Device: 01 Lite by Open Interpreter is a 100% open-source personal AI assistant that can control your computer. Let's review it and I'll show you how to install open...
- wesley7137/therapist-sft-format · Datasets at Hugging Face: no description found
- GitHub - furlat/SpriteStash: A multimodal sprites vectordb using LanceDB, Pydantic and pygame-ce: A multimodal sprites vectordb using LanceDB, Pydantic and pygame-ce - furlat/SpriteStash
- GitHub - mistralai-sf24/hackathon: Contribute to mistralai-sf24/hackathon development by creating an account on GitHub.
- unsloth_finetune_mistral-7b-v0.2-on-openhermes-2.5-dataset.py: GitHub Gist: instantly share code, notes, and snippets.
- GitHub - jquesnelle/crt-terminal: Retro styled terminal shell: Retro styled terminal shell. Contribute to jquesnelle/crt-terminal development by creating an account on GitHub.
- colbert-ir/colbertv2.0 · Hugging Face: no description found
Nous Research AI ▷ #ask-about-llms (24 messages🔥):
-
Few-Shot Prompts in SFT Datasets?: A few members engaged in the question of the normalcy or benefit of including few-shot prompts in instruction SFT datasets for boosting few-shot prompting capabilities. The conversation leaned towards the practice not being common, with an additional thread mentioned for further insight: relevant thread.
-
Searching for a Tiny LLM: A member asked for a recommendation for a low-parameter LLM to learn from, where another tried to clarify whether they meant 100M parameters, and a recommendation to watch Andrej Karpathy's videos for such models was given.
-
Causal Masking Theory or Engineering Hack?: The necessity of causal masking in attention was questioned, with another member pointing out its importance for the model to learn next token prediction.
-
The Tri-Layer Mystery of Llama's Feedforward: Discussion about Llama's feedforward having three linear layers was clarified with the mention of a GitHub issue and an arXiv paper. The implementation of SwiGLU was highlighted as a successful nonlinearity choice used in the model design.
-
Comparing ORPO to SFT+DPO and Model Preference for Coding: A query arose whether ORPO reliably outperforms SFT+DPO, with no consensus reached in the chat, and a separate inquiry into the preferred local model for coding in lmstudio was met with the mention that no specific model has stood out.
Link mentioned: Why does the FeedForward have three linear layer? · Issue #1004 · meta-llama/llama: I find that the FFN implementation has three linear layers. https://github.com/facebookresearch/llama/blob/ef351e9cd9496c579bf9f2bb036ef11bdc5ca3d2/llama/model.py#L337-L345 But in the paper "Atte…
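The three-linear-layer question has a compact answer in code: Llama's FFN is a SwiGLU, where a gate projection and an up projection are combined elementwise before a down projection maps back to the model dimension. A toy sketch with tiny made-up weights (not real parameters):

```python
# Sketch of a SwiGLU feed-forward block: three linear layers, one gate.
import math

def matvec(W, x):
    return [sum(wij * xj for wij, xj in zip(row, x)) for row in W]

def silu(v):
    return [x / (1.0 + math.exp(-x)) for x in v]

def swiglu_ffn(x, w_gate, w_up, w_down):
    gate = silu(matvec(w_gate, x))               # w1: dim -> hidden
    up = matvec(w_up, x)                         # w3: dim -> hidden
    hidden = [g * u for g, u in zip(gate, up)]   # elementwise gating
    return matvec(w_down, hidden)                # w2: hidden -> dim

x = [1.0, -1.0]
w_gate = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]   # 3x2 toy weights
w_up   = [[0.5, 0.5], [1.0, 0.0], [0.0, 1.0]]   # 3x2 toy weights
w_down = [[1.0, 1.0, 1.0], [0.0, 1.0, 0.0]]     # 2x3 toy weights
y = swiglu_ffn(x, w_gate, w_up, w_down)
```

So the "extra" layer is the gate: two of the three matrices expand to the hidden size, and their elementwise product is what the third matrix projects back down.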
Nous Research AI ▷ #project-obsidian (3 messages):
- Confirmation of Ability: A member expressed confidence in their ability to complete a task by stating, "let me see if i could do it."
- Inquiry on Model Characteristics: A question was raised regarding which models are considered "nonagreeable" without any further context or follow-up provided.
Nous Research AI ▷ #rag-dataset (19 messages🔥):
- Chit-chat on Discord Parenthood: Members briefly discussed the joys and surprises of parenthood, with one noting how "the biological machinery kicked in" after deciding to become a parent, resulting in unexpected happiness.
- In Search of Open Source Wikipedia RAG Index: A member inquired about an open-source Wikipedia RAG Index, and another suggested that there are similar resources available by various contributors.
- Insights on RAFT by Microsoft and UC Berkeley: A link was shared to a paper and Twitter post discussing "Retrieval Augmented Fine-Tuning (RAFT)", which aims to make language models like Llama 7B more robust by training with distractor documents and incorporating chain-of-thought. The shared paper and post showed RAFT's promising results, such as outperforming GPT-3.5 in medical contexts.
- Repository Link for RAFT Implementation: The GitHub repository for the RAFT implementation, named "Gorilla", was shared; it offers an API store for Large Language Models (LLMs). The repository can be found at GitHub - ShishirPatil/gorilla.
- Discussion on GermanRAG and Cross-Document Knowledge Retrieval: One member mentioned a project called GermanRAG while discussing the challenges of gathering knowledge across multiple documents. Another member confirmed this complexity and hinted at a potential solution they've been working on.
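The RAFT data recipe summarized above (pair each question with its oracle document plus sampled distractors, sometimes dropping the oracle so the model learns to ignore bad context) can be illustrated with a small data-assembly sketch; field names and probabilities here are hypothetical, not the paper's exact settings:

```python
# Rough illustration of RAFT-style training-example construction.
import random

def make_raft_example(question, oracle_doc, corpus,
                      k_distractors=2, p_no_oracle=0.2, rng=None):
    rng = rng or random.Random(0)
    # sample distractor documents that are not the oracle
    distractors = rng.sample([d for d in corpus if d != oracle_doc], k_distractors)
    # occasionally omit the oracle entirely
    docs = distractors if rng.random() < p_no_oracle else distractors + [oracle_doc]
    rng.shuffle(docs)
    return {"question": question, "context": docs}

corpus = ["doc about llamas", "doc about GPUs", "doc about RAFT", "doc about soup"]
ex = make_raft_example("What is RAFT?", "doc about RAFT", corpus)
```

The paper additionally trains the model to emit a chain-of-thought answer citing the oracle document, which this sketch leaves out.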
Links mentioned:
- gorilla/raft at main · ShishirPatil/gorilla: Gorilla: An API store for LLMs. Contribute to ShishirPatil/gorilla development by creating an account on GitHub.
- Tweet from Philipp Schmid (@_philschmid): Can we make RAG applications more robust with fine-tuning? A paper by @Microsoft and UC Berkley put this to the test to see if small open LLMs, like @AIatMeta Llama 7B, can match @OpenAI GPT-3.5. T...
Nous Research AI ▷ #world-sim (2 messages):
- Language Setting Hint Offered: A member shared a GIF from Tenor and noted that Tenor.com's language settings can be changed if it does not match the user's browser language.
- A Breezy Greeting: Another member simply dropped in to say "helloooo" to the chat.
Link mentioned: Everyone Get In Here Grim Patron GIF - Everyone Get In Here Grim Patron - Discover & Share GIFs: Click to view the GIF
OpenAI ▷ #annnouncements (1 messages):
-
Sora Inspires Creativity in Artists and Filmmakers: OpenAI highlights its collaboration with creatives using Sora, a tool that aids in bringing imaginative ideas into reality. Director Paul Trillo praises its potential: "Sora is at its most powerful when you're not replicating the old but bringing to life new and impossible ideas we would have otherwise never had the opportunity to see."
-
Sora: A Bridge to the Surreal: Production company shy kids values Sora for its capacity to "make things that are totally surreal," signaling a leap beyond generating realistic images towards crafting the unimaginable. Their excitement and the potential applications in creative workflows are detailed on the OpenAI blog.
Link mentioned: Sora: First Impressions: We have gained valuable feedback from the creative community, helping us to improve our model.
OpenAI ▷ #ai-discussions (264 messages🔥🔥):
-
Exploring LLM Biases and Defaults: A robust discussion unfolded on biases in AI, specifically surrounding the default alignment of general-purpose LLMs like GPT towards a "Western liberal-centrist" value system. A user proposed the idea of creating multiple "aligned" versions of AI models, arguing that current LLMs implicitly present Western values as optimal by default.
-
Customizing ChatGPT: The "Customize ChatGPT" feature was highlighted as a way to inject personal values or cultural background into AI responses. It was suggested that instead of identifying AI biases, users could focus on how ChatGPT can add productive value to their lives.
-
Aligning AI to Non-Western Norms Proves Tricky: Efforts to guide AI towards non-Western answers showed mixed results, with AI still inclining towards Western-centric ideals. Despite experimenting with inhibiting "Western-centric" prompts such as "TikTok" and attempting to influence it with non-Western scaffolding, the challenge remains to prevent the AI from exerting Western values on answers.
-
On Bias, Culture, and Reinforcement: The conversation touched on concerns about whether AI may reinforce existing biases if aligned with specific cultural or political viewpoints. The discussion considered whether AI should aim to broaden views or if users should have options to specify political or cultural alignments.
-
Looking for Practical AI Solutions: Users shared tips for handling common AI shortcomings like correcting DALL-E 3's misrepresentation of fingers and hands and the lack of a seed feature for subtle modifications. Discussion also ventured into the need for clearer communication from AI providers regarding certain features being excluded or postponed.
Links mentioned:
- Tweet from Ehsan Azari (@ehsanxpin): Library Is All You Need It will bring efficiency and consistency, scalability, adaptability, and "interoperability" in language models globally CC: @karpathy Quoting Ehsan Azari (@ehsanx...
- summarize youtube video at DuckDuckGo: no description found
OpenAI ▷ #gpt-4-discussions (67 messages🔥🔥):
-
Custom GPT Sidebar Limits: A member expressed concerns about a change in the Custom GPT pin limit on the sidebar, which seemed reduced from 10 to 4 without prior notice. There was no mention of a workaround or solution provided.
-
Seeking Keybind for Shared GPTs Access: A member asked if there was a keyboard shortcut to access "MyGPTs -> Shared with ME", and another member provided a suggestion to use user scripts with a browser extension like tampermonkey to create a custom solution.
-
Email Verification Request: A user sought confirmation on the authenticity of an email purportedly from OpenAI; another user suggested checking the mail headers for verification.
-
GPT-4 with Vision Capability: In a discussion about the abilities of GPT models to read images, it was affirmed that GPT-4 with Vision is capable of this, with a link to the official OpenAI documentation provided for reference: Official OpenAI Documentation on Vision.
-
Clarification on the Discontinuation of Plugins: Addressing an inquiry about accessing plugins, a member clarified that the ChatGPT plugins beta is being wound down with a helpful URL to the official announcement: Winding Down the ChatGPT Plugins Beta.
OpenAI ▷ #prompt-engineering (61 messages🔥🔥):
- Vision's Recognition of the Disabled: A member expressed difficulty in getting recognition from Vision for images that include themselves, specifically for use in robotics and personal grooming assistance. The suggestion was made to explore solutions carefully due to privacy concerns.
- Enhancing Chapter Writing with GPT: In an exchange about improving writing with Chat-GPT, a member sought advice on prompting for adding subsections without rewriting an entire chapter. Tips included being more specific in the prompt, such as specifying where to insert new content.
- Prompt Engineering for Better Code: A member shared a detailed multi-part prompt designed to enhance the quality of coding tasks executed by GPT, focusing on meticulously crafted steps emphasizing coding practices. Other members engaged, discussing the merits of the approach and offering to refine it and provide their own versions.
- Migration Issues with OpenAI SDK: A user sought assistance with an error encountered from an OpenAI SDK update, which deprecated the .Completion endpoint. Another member directed them to a server channel specifically for questions related to migration issues.
- Striving for Show, Not Tell in AI Writing: Members discussed strategies to prompt Chat-GPT to show actions in storytelling instead of telling, aiming to improve narrative quality. Advice was shared on reformulating prompts to guide Chat-GPT towards desired writing styles.
Links mentioned:
- GitHub - openai/openai-python: The official Python library for the OpenAI API: The official Python library for the OpenAI API. Contribute to openai/openai-python development by creating an account on GitHub.
- v1.0.0 Migration Guide · openai/openai-python · Discussion #742: We have released a new major version of our SDK, and we recommend upgrading promptly. It's a total rewrite of the library, so many things have changed, but we've made upgrading easy with a cod...
OpenAI ▷ #api-discussions (61 messages🔥🔥):
-
Exploring Vision's Accessibility: A user sought advice on how to make the Vision model recognize them as a disabled person for use cases like vision assistance for personal grooming. Despite sensitivity around discussing potential solutions, suggestions to write up the issue for a Discord suggestions channel and a link to a previous related post were offered.
-
Enriching GPT-4 Generated Book Content: A member was seeking help on how to instruct GPT-4 to add subsections to a chapter without rewriting the entire content. Suggested strategies included using section numbering for better organization and clarity during prompt construction, and starting a new conversation detailing the versions for a consolidated output.
-
Improving GPT Coding Task Responses: Users discussed strategies to improve GPT's code output, with one sharing a detailed prompt that instructs the model for better performance during coding tasks. Suggestions for using technical process names for better engagement and customized JSON instructions as prompts for coding tasks were made.
-
Crafting a âKawaii-bubblyâ AI Personality: A user requested assistance in creating prompts to give GPT a âkawaii-bubblyâ personality for writing an animatorâs social media bio. Although it was challenging to create prompts for a personality the user couldnât emulate, examples of attempted prompts were provided.
-
Increasing Quality of Hypothesis Paragraphs with ChatGPT: A member needed support to avoid generic statements and produce hypothesis paragraphs laden with theories and proofs from experts. Advice was given to communicate directly with ChatGPT as one would in a normal conversation and to specify the inclusion of certain elements for a high-quality output.
Links mentioned:
- GitHub - openai/openai-python: The official Python library for the OpenAI API: The official Python library for the OpenAI API. Contribute to openai/openai-python development by creating an account on GitHub.
- v1.0.0 Migration Guide · openai/openai-python · Discussion #742: We have released a new major version of our SDK, and we recommend upgrading promptly. It's a total rewrite of the library, so many things have changed, but we've made upgrading easy with a cod...
HuggingFace ▷ #general (242 messages🔥🔥):
- Questions About AI Art Prompt Help: Someone asked for recommendations on good places to get help making prompts for AI art. No specific solutions were provided in the messages.
- Blenderbot's Consistent Character: A user discussed the benefits of Blenderbot's consistent character in contrast to chatbots that are self-aware of being AI, such as ChatGPT. They noted Blenderbot might claim to be a housewife or a schoolteacher but remains "in character", unlike some other models.
- Execution Speed of Multiplication vs. Conditional Checking on GPUs: A member asked about the performance difference between a multiplication operation (a*b=c) and a conditional check (if(a==0){}) on a GPU. Another user noted that shader compilers do a lot of optimization work, and someone recommended looking into works by "iq" for more information.
- Linguistically Diverse Prompt for ChatGPT: A detailed and creatively complex prompt was requested for ChatGPT, combining various authors' styles and principles from well-known personalities; one user simply responded "Bloody hell" to the complexity.
- TensorRT-LLM vs. ExLlama v2 for GPU Inference: The discussion covered different methods for running large language model (LLM) inference on GPUs: TensorRT-LLM may suit single-batch inference, while libraries like ExLlama v2 are optimized for single-user speed. For serving many simultaneous users, solutions like vLLM or TGI were recommended.
- Quantizing with GGML: A user asked whether ggml supports quantization for all models or only generative ones; another member responded that ggml does not support all models but covers various language and multimodal models, and recommended dedicated implementations like llama.cpp for faster performance.
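The GPU question above (multiplication vs. conditional check) largely comes down to branch divergence, and one pattern shader compilers favor is the branchless "select": compute both sides and blend them with a 0/1 mask. A minimal illustrative sketch in Python (the function name and scalar setting are mine; real shader code operates on SIMD lanes):

```python
# Hypothetical sketch of the branchless "select" idiom:
# rather than `if (a == 0) c = x; else c = y;`, build a 0/1 mask from the
# comparison and blend both candidates, avoiding a divergent branch.
def branchless_select(a, x, y):
    mask = int(a == 0)          # the comparison yields 0 or 1, as on GPU hardware
    return mask * x + (1 - mask) * y
```

For example, `branchless_select(0, 10, 20)` selects `10`, while any nonzero `a` selects `20`; the multiply-and-add costs the same on every lane, which is exactly why compilers prefer it to a branch.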
Links mentioned:
- Rubik's AI - Waitlist: no description found
- CopyleftCultivars/Gemma2B-NaturalFarmerV1 · Hugging Face: no description found
- Coqui Bark Voice Cloning Docker - a Hugging Face Space by fffiloni: no description found
- HF-Mirror - Huggingface mirror site: no description found
- Hugging Face Forums: Community Discussion, powered by Hugging Face <3
- Hugging Face â The AI community building the future.: no description found
- Process: no description found
- unilm/bitnet/The-Era-of-1-bit-LLMs__Training_Tips_Code_FAQ.pdf at master · microsoft/unilm: Large-scale Self-supervised Pre-training Across Tasks, Languages, and Modalities - microsoft/unilm
- GitHub - xai-org/grok-1: Grok open release: Grok open release. Contribute to xai-org/grok-1 development by creating an account on GitHub.
- GitHub - mistralai-sf24/hackathon: Contribute to mistralai-sf24/hackathon development by creating an account on GitHub.
- GitHub - turboderp/exllamav2: A fast inference library for running LLMs locally on modern consumer-class GPUs: A fast inference library for running LLMs locally on modern consumer-class GPUs - turboderp/exllamav2
- GitHub - davidberenstein1957/fast-sentence-transformers: This repository, called fast sentence transformers, contains code to run 5X faster sentence transformers using tools like quantization and ONNX.: This repository, called fast sentence transformers, contains code to run 5X faster sentence transformers using tools like quantization and ONNX. - davidberenstein1957/fast-sentence-transformers
- GitHub - coqui-ai/TTS: 🐸💬 - a deep learning toolkit for Text-to-Speech, battle-tested in research and production: 🐸💬 - a deep learning toolkit for Text-to-Speech, battle-tested in research and production - coqui-ai/TTS
- GitHub - suno-ai/bark: 🔊 Text-Prompted Generative Audio Model: 🔊 Text-Prompted Generative Audio Model. Contribute to suno-ai/bark development by creating an account on GitHub.
- Don't Talk to People Like They're Chatbots: "AI could make our human interactions blander, more biased, or ruder," write Carr Center faculty Bruce Schneier and Technology and Human Rights Fellow Albert Fox Cahn in The Atlantic.
- MTEB Leaderboard - a Hugging Face Space by mteb: no description found
- GitHub - vgthengane/Awesome-Mamba-in-Vision: List of papers related to State Space Models (Mamba) in Vision.: List of papers related to State Space Models (Mamba) in Vision. - vgthengane/Awesome-Mamba-in-Vision
HuggingFace ▷ #today-im-learning (9 messages🔥):
- Intrigue for an Unspecified Tool: A member expressed excitement about a tool but did not provide further details or a link to it.
- A Call for Help with HuggingFace: A user sought assistance with the qra-13b model from HuggingFace, with a particular mention of Poland.
- Model Conversion Endeavors: A member has been working on converting the GLiNER model from PyTorch to Candle (Rust), exploring quantization techniques and learning the Candle library along the way.
- Perks of Model Conversion to Rust: In a conversation about the advantages of converting models to Rust, a member cited fewer dependencies, suitability for production deployment, and improved speed, though their current implementation wasn't faster yet.
- Rust-Based Models and GPU Compatibility: It was confirmed that models converted to Rust using the Candle library are compatible with GPUs.
HuggingFace ▷ #cool-finds (12 messages🔥):
- Deep Dive into Visual Processing: A YouTube video titled "Understanding early visual processing mechanisms by the principle of efficient encoding" discusses early visual processing in biological vision.
- Exploring Superlet Transform for Audio Analysis: A new method called Superlet Transform is highlighted as an improvement for real-time audio analysis, with its effectiveness demonstrated in a Nature article and complementary benchmarks provided in an article.
- Language Agent Tree Search with Langchain: An article on Medium discusses the potential revolution in decision-making using language models with Langchain, potentially changing how language agents approach problem-solving. The article is available on Medium.
- Valuable Insights from CivitAI: An assortment of articles and guides on Stable Diffusion, including tips, tricks, and insights for both novices and intermediates, can be found collected by a member on CivitAI.
- The Significance of Data: A member shared an arXiv paper that underscores the importance of data and its potential to be a critical factor in a given context.
Links mentioned:
- Exploring Lightweight Federated Learning for Distributed Load Forecasting: Federated Learning (FL) is a distributed learning scheme that enables deep learning to be applied to sensitive data streams and applications in a privacy-preserving manner. This paper focuses on the u...
- A curated list of Stable Diffusion Tips, Tricks, and Guides | Civitai: Stable Diffusion: https://stable-diffusion-art.com/samplers https://civitai.com/articles/983/insights-for-intermediates-how-to-craft-the-images-you...
- Explaining Chest X-ray Pathologies in Natural Language: Most deep learning algorithms lack explanations for their predictions, which limits their deployment in clinical practice. Approaches to improve explainability, especially in medical imaging, have oft...
- Understanding early visual processing mechanisms by the principle of efficient encoding: This is lecture 2 of the five lectures at CVPR 2022 tutorial "A post-Marrian computational overview of how biological (human) vision works", on June 19, 2022...
- Time-frequency super-resolution with superlets - Nature Communications: Identifying the frequency, temporal location, duration, and amplitude of finite oscillation packets in neurophysiological signals with high precision is challenging. The authors present a method based...
- Super-resolved timeâfrequency measurements of coupled phonon dynamics in a 2D quantum material - Scientific Reports: no description found
HuggingFace ▷ #i-made-this (19 messages🔥):
- Federated Learning Goes Energy-Efficient: A GitHub project on Exploring Lightweight Federated Learning for load forecasting is shared, tackling load forecasting with clustering and sequential DNN methods. The project is accessible at Exploring-Lightweight-Federated-Learning-for-load-forecasting on GitHub.
- Stable Diffusion Resources Compiled: A member shares multiple links about Stable Diffusion, including guides for samplers, insights, and tools, with standalone articles like "how to craft the images you want with a1111" and a video guide on "video2video". The links, such as civitai.com, provide valuable resources for Stable Diffusion users.
- Anki Made Easy with AnkiForge: A new app called AnkiForge is announced, which generates Anki flashcards from text notes, with support for audio files planned. The app can be tried at AnkiForge.
- Localization and Trust in Fact-Checking: A new research paper on "Evidence Attribution of LLM Output Through Knowledge Graphs" for verifying LLM outputs is introduced, exploring trust and validation mechanisms in the era of misinformation. The paper focuses on a fine-grained evidence attribution method and is available on arXiv.
- Exploring AI's Limits in Recurrent Neural Notes Newsletter: The latest issue of Recurrent Neural Notes discusses the potential limits of AI and includes in-depth articles. Discover the newsletter's insights and thoughts on AI's future at Recurrent Neural Notes on Substack.
- German Learning with GPT Bot Hans: An announcement introduces Hans, a GPT-powered German language learning tool available in the GPT store, aimed at helping users improve their German language skills. Check out Hans 🥨 in the GPT store.
- Video Explainers for LLM Jargons: A series of videos explaining LLM (large language model) concepts like Multi Query Attention and Sliding Window Attention is shared to help demystify the complex world of language models. The educational series is available on YouTube.
Links mentioned:
- A curated list of Stable Diffusion Tips, Tricks, and Guides | Civitai: Stable Diffusion: https://stable-diffusion-art.com/samplers https://civitai.com/articles/983/insights-for-intermediates-how-to-craft-the-images-you...
- LLM Jargons Explained: Welcome to the "LLM Jargons Explained" series, where I demystify the complex world of language models and decoding techniques. Whether you're a language mode...
- ClaimVer: Explainable Claim-Level Verification and Evidence Attribution of Text Through Knowledge Graphs: In the midst of widespread misinformation and disinformation through social media and the proliferation of AI-generated texts, it has become increasingly difficult for people to validate and trust inf...
- Command-R - a Hugging Face Space by Tonic: no description found
- GitHub - ADG4050/Exploring-Lightweight-Federated-Learning-for-load-forecasting: Federated Learning on Energy Dataset for load forecasting using clustering and sequential DNN methods: Federated Learning on Energy Dataset for load forecasting using clustering and sequential DNN methods - ADG4050/Exploring-Lightweight-Federated-Learning-for-load-forecasting
- RNN #9 - The Real Limits of AI: Exploring the Computational Boundaries of Neural Networks
- What you just said: From the movie, Billy Madison
- Anki Forge: no description found
HuggingFace ▷ #reading-group (48 messages🔥):
- Insights Into Obesity Unveiled: A Kaggle notebook offering an EDA exploration and visualisation of obesity data has been shared, promising insights into factors that influence obesity across demographics and lifestyles. The notebook can be found at Deciphering Obesity Trends: An In-depth EDA.
- Upcoming Event Alert: A reminder about an imminent meeting was posted, quickly followed by a highlight of the discussed paper on a HyperZ·Z·W operator to replace transformers. The paper aims to tackle challenges in attention-based mechanisms and can be read here.
- The Quest for 1 Million Context: Conversation touched on the difficulty of achieving a 1-million-token context with vanilla attention and speculated on companies' technology, particularly Google's, and their potential proprietary advances in computation efficiency.
- Relevance Matters for Chatbot Responses: A member reflected on the impressive capabilities of ChatGPT when asked highly relevant and important questions, stating that the model's responses align with the significance of the inquiry posed.
- Catch the Recording of the Recent Meeting: For those who missed the reading group's event, a recording was mentioned and eventually linked, hosting a presentation on next-generation network architecture and the HyperZ·Z·W operator. Interested parties can watch the presentation at Hugging Face Reading Group 16: HyperZ·Z·W Operator Terminator.
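The 1-million-context difficulty discussed above is easy to quantify: vanilla attention materializes an n×n score matrix per head, so memory grows quadratically with sequence length. A back-of-the-envelope sketch (head count and fp16 score size are illustrative assumptions, not any particular model's configuration):

```python
# Rough memory footprint of the full attention score matrix alone,
# assuming `num_heads` heads and fp16 (2-byte) scores. Illustrative numbers.
def attention_scores_gb(seq_len, num_heads=32, bytes_per_score=2):
    return seq_len * seq_len * num_heads * bytes_per_score / 1024**3
```

At seq_len = 1,000,000 this comes to tens of thousands of GiB for the scores alone, which is why exact vanilla attention at that scale is implausible and why speculation turns to proprietary tricks like chunked, sparse, or memory-efficient attention.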
Links mentioned:
- Deciphering Obesity Trends: An In-depth EDA: Explore and run machine learning code with Kaggle Notebooks | Using data from multiple data sources
- Hugging Face Reading Group 16: HyperZ·Z·W Operator Terminator: Presenter: Harvie Zhang, who is also the author of this work. For this meeting there was unfortunately a bit of a moderation issue
- Watchtower: Provided to YouTube by TuneCore. Watchtower · Michael Salvatori, Skye Lewin, Rotem Moav & Pieter Schlosser. Destiny 2: Forsaken (Original Soundtrack) ℗ 2018 Bungi...
HuggingFace ▷ #core-announcements (1 messages):
- Experimental Tool for Memory Requirements: An experimental tool has been released to gauge the inference-time memory requirements of a `DiffusionPipeline`. The tool is available for testing and feedback is welcomed on the GitHub discussion page.
Link mentioned: Calculate the component-wise memory of `DiffusionPipeline` checkpoint · huggingface/diffusers · Discussion #7434: We shipped a Hugging Face Space that lets you calculate the memory requirements of a DiffusionPipeline checkpoint given a torch_dtype: https://huggingface.co/docs/diffusers/main/en/using-diffusers/…
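The arithmetic behind such a calculator is simple: a component's weight memory is roughly its parameter count times the byte width of the chosen torch_dtype. A hedged sketch of that estimate (the function and dtype table below are mine, not the actual tool's code; activations and caches add more on top):

```python
# Illustrative estimate of a model component's weight memory for a given dtype.
DTYPE_BYTES = {"float32": 4, "float16": 2, "bfloat16": 2, "int8": 1}

def weight_memory_gb(num_params, dtype="float16"):
    """Parameter count times bytes per element, reported in GiB."""
    return num_params * DTYPE_BYTES[dtype] / 1024**3
```

For example, a 1B-parameter component in float16 works out to about 1.86 GiB of weights, and switching to float32 doubles it.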
HuggingFace ▷ #computer-vision (21 messages🔥):
- SegGPT Joins the HuggingFace Hub: HuggingFace introduces SegGPT, a model that can be trained for any image-to-image task. It was highlighted in the paper SegGPT: Segmenting Everything In Context and has shown impressive one-shot segmentation results on datasets like COCO-20 and FSS-1000.
- Cracking Diffusion Models: A member expressed that, after engaging with several blogs and coding along with tutorials, they've improved their understanding of diffusion models. Eager to contribute to open-source diffusion projects, they ponder whether to delve deeper into coding or explore different tasks and fine-tuning techniques effective in diffusion models.
- Puzzles With Vision Model Channel Inputs: A challenge was raised regarding vision models typically accepting only 3-channel images. It was pointed out that 3-channel defaults are common due to the prevalence of such data in benchmark datasets, though BridgeTower was mentioned as not accommodating single-channel images despite configuration attempts.
- Fusing Image Features with LLM: Responding to an inquiry on merging text and image generation models, BLIP-2 was recommended as a resource. The associated BLIP-2 paper outlines an approach where vision-language representations are learned by training an intermediary transformer connecting pre-trained image encoders to language models.
- BLIP-2 Resources Shared: Further resources on BLIP-2, including the transformers documentation, were shared to assist with understanding the fine-tuning process. BLIP instruct, an instruction-tuned variant, was noted to potentially yield better performance than standard BLIP-2.
Links mentioned:
- BLIP-2: Bootstrapping Language-Image Pre-training with Frozen Image Encoders and Large Language Models: The cost of vision-and-language pre-training has become increasingly prohibitive due to end-to-end training of large-scale models. This paper proposes BLIP-2, a generic and efficient pre-training stra...
- BLIP-2: no description found
- SegGPT: no description found
HuggingFace ▷ #NLP (24 messages🔥):
- HuggingFace Trainer Troubles: A user is experiencing issues with HuggingFace's Trainer class not recognizing the "accelerate" package despite it being installed. Various troubleshooting steps are discussed, including upgrading packages, clearing caches, and changing the import order of libraries.
- SentenceTransformer Function Fails Offline: Several users report problems with SentenceTransformers not accepting local directories in offline environments, in contrast to the behavior of `transformers.AutoModel.from_pretrained`. Requests for validation of SentenceTransformers' offline capabilities are made.
- Quest for NEET/JEE Dataset: A user is seeking datasets with questions, answers, and explanations from previous years' NEET/JEE exams to train an MCQ answer generator using GPT-4, with concerns about the potential margin of error discussed.
- Embedding Quantization Breakthrough: 🤗 HuggingFace announced a new embedding quantization method for Sentence Transformers, delivering large improvements in search speed and reductions in memory, storage, and cost while preserving retrieval performance. Details and a demo can be found at the announcement space and the in-depth blog post.
- Inference API Summary Length Clarification: A user asks about controlling the length of summaries produced by the facebook/bart-large-cnn model in a MERN application. It is explained that the `max_length` parameter determines the maximum sentence length in input batches.
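The embedding-quantization item above rests on a simple idea: binary quantization keeps only the sign of each embedding dimension, so 32 float32 values pack into a single 32-bit word, and retrieval compares codes with Hamming distance instead of a float dot product. A toy sketch of the principle (not the sentence-transformers implementation):

```python
# Toy binary quantization: one bit per embedding dimension (1 if positive).
def binarize(embedding):
    bits = 0
    for i, value in enumerate(embedding):
        if value > 0:
            bits |= 1 << i
    return bits

# Hamming distance between two binarized embeddings: count of differing bits.
def hamming(a, b):
    return bin(a ^ b).count("1")
```

For example, `binarize([0.5, -0.2, 0.1])` yields `0b101`; replacing a float dot product with an XOR and popcount over packed bits is where the reported speed and memory wins come from, at a small cost in retrieval quality.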
Links mentioned:
- awnr/Mistral-7B-v0.1-half-naive-A · Hugging Face: no description found
- Quantized Retrieval - a Hugging Face Space by sentence-transformers: no description found
- Binary and Scalar Embedding Quantization for Significantly Faster & Cheaper Retrieval: no description found
- Embedding Quantization â Sentence-Transformers documentation: no description found
HuggingFace ▷ #diffusion-discussions (31 messages🔥):
- All-MiniLM-L6-v2 Model Inquiry: A member expressed interest in using the all-MiniLM-L6-v2 model for their dataset but needed guidance on how to download and train it. They requested someone to direct message them for assistance.
- Background Addition for Images on Hugging Face: One member asked for a pretrained model on Hugging Face capable of adding backgrounds to images. Another member suggested the use of RMBG for background removal and the application of filters for smoothing with tools like OpenGL, PyGLET, Kivy, and GIMP.
- Stylizing Images with SDXL: A question was raised on how to stylize an input image into watercolor or other effects and how to make images seamless for creating repeating patterns using SDXL.
- Advice Sought on Continuing Diffusion Studies: A member who studied and wrote code about diffusion models from various resources, including YouTube and Medium articles, asked for advice on further steps, whether to continue coding, study diffusion techniques, or dive into fine-tuning, with the long-term goal of contributing to open-source diffusion model repositories.
- Learning Resources for Fine-Tuning Diffusion Models: Two members had an exchange where one asked for resources to learn fine-tuning diffusion models on personal images, and the other pointed to Hugging Face documentation and suggested trying to fix a simple open source issue marked with the "Good first issue" label, complemented by examining previously merged PRs.
Link mentioned: How to contribute to Diffusers 🧨: no description found
LlamaIndex ▷ #blog (8 messages🔥):
- Streamlining Human-LlamaIndex Interaction: There's a new template that allows humans to interact with LlamaIndex's agents only when intervention is needed, aiming for a less intrusive user experience. Here's a sneak peek at the Twitter post.
- Custom LLMs Join the LlamaIndex Fold: Learn how to integrate your own custom large language models (LLMs) into LlamaIndex. Leonie Monigatti explains the process in detail on LinkedIn.
- Creating a RAG Agent for PDFs: Ashish S. crafted a tutorial on building an agentic RAG flow over PDFs that includes LlamaParse for extracting text and tables. The complete guide is shown in this Tweet.
- Building RAG and Agents with MistralAI: A comprehensive resource compilation for using LlamaIndex, MistralAI, and optionally LlamaParse to build advanced RAG and agents has been announced. Access the resources here.
- Python Documentation Upgrade for LlamaIndex: The new LlamaIndex Python documentation has been revamped to prominently feature example notebooks, improved search functionality with previews and term highlights, and streamlined API information. Check out the improved docs in this Twitter announcement.
LlamaIndex ▷ #general (296 messages🔥🔥):
- Discussion on Bot and AI Tools Integration: Users discussed integrating different AI tools like the Merlin API and LocalAI with LlamaIndex, where LocalAI can be used via LlamaIndex's `OpenAILike` class, as detailed in their documentation and the LocalAI setup guide.
- Evaluation Logic Explanation Requested: A user sought an explanation of LlamaIndex's evaluation code logic, involving evaluators such as `CorrectnessEvaluator` and `SemanticSimilarityEvaluator`. Another user, whitefang_jr, provided clarity by tracing the pathway input takes through the different evaluators, with links to the BatchEvalRunner documentation.
- Inquiry About Mixed Messaging in Documentation: A user expressed frustration over conflicting information across LlamaIndex's documentation, citing specific examples such as guides on using tools that don't match the implementation. A discussion followed to clarify the confusion, with others acknowledging a need for updated notebooks and docs after the v0.10 updates.
- Request for Multi-Agent Chatbot Example: A user asked for examples of building multi-agent chatbots with LlamaIndex to accomplish sequential tasks like SQL queries, summarization, and Q&A. Teemu2454 provided a link to an example of multi-document agents (source) as a relevant starting point.
- Turning Python Functions into LlamaIndex Tools: Asking about functionality similar to OpenAI Assistants with tools, a user wanted to know how to convert a Python function into a tool for LlamaIndex. Cheesyfishes provided code using `FunctionTool.from_defaults(fn=add)` and a link to the associated source code on GitHub.
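Conceptually, turning a plain Python function into an agent tool means deriving a name, description, and parameter list from the function itself so the LLM can decide when to call it. A stdlib-only sketch of that idea (this is not LlamaIndex's `FunctionTool` code; `make_tool` and the dict shape are illustrative):

```python
import inspect

def add(a: int, b: int) -> int:
    """Add two integers."""
    return a + b

def make_tool(fn):
    """Illustrative stand-in for what FunctionTool.from_defaults(fn=...) derives."""
    sig = inspect.signature(fn)
    return {
        "name": fn.__name__,                      # tool name shown to the LLM
        "description": inspect.getdoc(fn) or "",  # docstring becomes the description
        "parameters": list(sig.parameters),       # argument names for the schema
        "call": fn,                               # the callable the agent invokes
    }

tool = make_tool(add)
```

The design point is that the function's own metadata (name, docstring, signature) is enough to build the tool spec, so no separate schema needs to be written by hand.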
Links mentioned:
- Merlin API Platform: Integrate LLMs Into Your Production Apps In Minutes.
- Redirecting...: no description found
- Google Colaboratory: no description found
- no title found: no description found
- Notion â The all-in-one workspace for your notes, tasks, wikis, and databases.: A new tool that blends your everyday work apps into one. It's the all-in-one workspace for you and your team
- MTEB Leaderboard - a Hugging Face Space by mteb: no description found
- LlamaIndex - LlamaIndex: no description found
- LlamaIndex - LlamaIndex: no description found
- Using Documents - LlamaIndex: no description found
- Redirecting...: no description found
- Qdrant Vector Store - LlamaIndex: no description found
- Controllable Agents for RAG - LlamaIndex: no description found
- Vector Stores - LlamaIndex: no description found
- LocalAI - LlamaIndex: no description found
- LlamaIndex v0.10 â LlamaIndex, Data Framework for LLM Applications: LlamaIndex is a simple, flexible data framework for connecting custom data sources to large language models (LLMs).
- Multi-Document Agents (V1) - LlamaIndex: no description found
- HuggingFace LLM - StableLM - LlamaIndex: no description found
- 10Q Analysis - LlamaIndex: no description found
- Vector - LlamaIndex: no description found
- Build software better, together: GitHub is where people build software. More than 100 million people use GitHub to discover, fork, and contribute to over 420 million projects.
- llama_index/llama-index-integrations/vector_stores/llama-index-vector-stores-qdrant/llama_index/vector_stores/qdrant/base.py at f5263896121721de1051ce58338a1e0ea6950ca7 · run-llama/llama_index: LlamaIndex is a data framework for your LLM applications - run-llama/llama_index
- llama_index/llama-index-core/llama_index/core/objects/base.py at be63bae53227f1360472477eb2afa993791c09ce · run-llama/llama_index: LlamaIndex is a data framework for your LLM applications - run-llama/llama_index
- llama_index/llama-index-integrations/extractors/llama-index-extractors-entity/llama_index/extractors/entity/base.py at main · run-llama/llama_index: LlamaIndex is a data framework for your LLM applications - run-llama/llama_index
- llama_index/llama-index-packs/llama-index-packs-snowflake-query-engine/llama_index/packs/snowflake_query_engine/base.py at be63bae53227f1360472477eb2afa993791c09ce · run-llama/llama_index: LlamaIndex is a data framework for your LLM applications - run-llama/llama_index
- no title found: no description found
- Google - LlamaIndex: no description found
- API Reference - LlamaIndex: no description found
- Learning From Data - Online Course (MOOC): no description found
- Redirecting...: no description found
- no title found: no description found
- Tracer Bullet Development: no description found
- BatchEvalRunner - Running Multiple Evaluations - LlamaIndex: no description found
- Migrating from ServiceContext to Settings - LlamaIndex: no description found
- Release v2.6.0 - Embedding Quantization, GISTEmbedLoss · UKPLab/sentence-transformers: This release brings embedding quantization: a way to heavily speed up retrieval & other tasks, and a new powerful loss function: GISTEmbedLoss. Install this version with pip install sentence-trans...
- Chat Stores - LlamaIndex: no description found
- Index - LlamaIndex: no description found
- Query Pipeline for Advanced Text-to-SQL - LlamaIndex: no description found
- fix async streaming by logan-markewich · Pull Request #12187 · run-llama/llama_index: Need to ensure lazily-declared queue/async stuff is actually instantiated before accessing Fixes #12180
- Cria Demo (Thu Mar 14 2024): no description found
- [Bug]: AttributeError: 'NoneType' object has no attribute 'wait' · Issue #12180 · run-llama/llama_index: Bug Description Async Streaming Chat example: https://docs.llamaindex.ai/en/stable/examples/agent/openai_agent/#async-streaming-chat produces exception: AttributeError: 'NoneType' object has n...
- Bedrock Embeddings - LlamaIndex: no description found
- Bedrock - LlamaIndex: no description found
- Interacting with LLM deployed in Amazon SageMaker Endpoint with LlamaIndex - LlamaIndex: no description found
- Embedding Similarity Evaluator - LlamaIndex: no description found
- Faithfulness Evaluator - LlamaIndex: no description found
Latent Space ▷ #ai-general-chat (164 messages🔥🔥):
- Searching for a Video-Processing Tool Like Whisper: A user enquired about a tool comparable to Whisper but for video processing, mentioning that it possibly leveraged a VLM for scene evaluation and was potentially open source. Multiple suggestions were made, including Video Mamba, Twelve Labs, and the video intelligence service videodb.io.
- OpenAI's Sora Wows Artists: The OpenAI blog shared first impressions of Sora, revealing strong interest and endorsements from creative professionals. Examples of artists' work were discussed, showing how Sora enables the creation of both realistic and surreal imagery.
- Google's AI Studio vs. Vertex AI Confusion: Discussions revolved around the differences and usage of Google's AI Studio versus Vertex AI for serving models like Gemini, with AI Studio starting to roll out 1-million-token context APIs and comparisons made to the OpenAI API in terms of ease of use.
- AI Wearables on a Roll: Chat snippets focused on the trend of open-source AI wearables, including the $200 ALOHA project, and discussion of whether such products are fully local. Pre-orders for Compass, another AI wearable, began, with shipping planned to start the following week.
- Efficiency in Large Language Models: LLMLingua by Microsoft was shared as a tool to compress LLM prompts and KV-Cache, potentially achieving up to 20x compression with minimal performance loss. It was suggested that while optimizing costs is essential, it's also crucial not to over-optimize too early and instead focus on delivering value.
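The compression idea behind LLMLingua can be shown with a deliberately naive sketch: drop low-information tokens so fewer tokens reach the model. LLMLingua itself ranks tokens with a small language model; this stopword filter is only a stand-in to show the shape of the technique (the `FILLER` set is arbitrary):

```python
# Naive prompt compression: strip common filler words. A toy stand-in for
# LLMLingua-style token pruning (which uses a small LM to score informativeness).
FILLER = {"the", "a", "an", "of", "to", "and", "is", "are", "that", "on"}

def compress_prompt(prompt):
    kept = [word for word in prompt.split() if word.lower() not in FILLER]
    return " ".join(kept)
```

Even this crude version makes the trade-off visible: every dropped token saves cost and latency but risks removing context the model needed, which is why a learned informativeness score beats a fixed stopword list.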
Links mentioned:
- InstaLoRA - Instant LoRA Generator: Generate your LoRA in seconds
- VideoDB: Build intelligent applications on all types of video with 2 simple lines of code. Built by developers. For developers.
- VideoMamba: State Space Model for Efficient Video Understanding: no description found
- Tweet from Nik Shevchenko (@kodjima33): Today I saw the launch of another "open-source" AI wearable that has not published anything just to charge you 5x the cost At @MistralAI x @cerebral_valley hackathon in @SHACK15sf we built FR...
- Stability AI Founder Emad Mostaque Plans To Resign As CEO, Sources Say: Mostaque has told a number of people close to him that he plans to step down as chief executive of the once buzzy generative AI startup known for Stable Diffusion.
- Sora: First Impressions: We have gained valuable feedback from the creative community, helping us to improve our model.
- EvenUp: no description found
- Another One GIF - Dj Khaled Another One One More Time - Discover & Share GIFs: Click to view the GIF
- Stability AI Announcement â Stability AI: Earlier today, Emad Mostaque resigned from his role as CEO of Stability AI and from his position on the Board of Directors of the company to pursue decentralized AI. The Board of Directors has appoin...
- Tweet from Xiang Yue (@xiangyue96): @MistralAI just released their v0.2 Base 😱. @WenhuChen and I quickly evaluated a few benchmarks using the OpenCompass evaluation package. It seems that the capability dropped a little bit on nearly al...
- Multimodal AI that understands videos like humans: Bring human-like video understanding to any application, whether you have terabytes or petabytes of video
- Tweet from Olivia Moore (@omooretweets): Why is ChatGPT being adapted for so many use cases? (1) It has distribution; and (2) It combines text, image, voice to be a full partner. But, it's limited in UI and workflow. IMO, this is part...
- Tweet from mkrupskis (@ItsMartynasK): Today I am launching pre-orders for Compass - a $99 open-source guide. - 30 hours battery life - learns by transcribing your conversations - revisit important moments in your life - shipping first or...
- Tweet from Matteo Pagliardini (@MatPagliardini): A tweak in the architecture of #Transformers can significantly boost accuracy! With direct access to all previous blocks' outputs, a 48-block #DenseFormer outperforms a 72-block Transformer, with fas...
- The Unbundling of ChatGPT (Feb 2024 Recap): Peak ChatGPT? Also: our usual highest-signal recap of top items for the AI Engineer from Feb 2024!
- Tweet from AshutoshShrivastava (@ai_for_success): AI-powered devices: wearables/personal assistants Humane AI Pin: $699 Rabbit R1: $199 Open Interpreter 01 Light: $99 and open-sourced Compass: $99 and open-sourced This is just the start... We'r...
- Tweet from swyx (@swyx): The Unbundling of ChatGPT https://latent.space/p/feb-2024 A whole year has passed with ~0 growth in ChatGPT user numbers. Instead, users are exploring a whole host of verticalized players for ...
- Tweet from Matt Shumer (@mattshumer_): Introducing `claude-investor` The first Claude 3 investment analyst agent. Just provide an industry, and it will: - Find financial data/news for key companies - Analyze sentiment/trends for each ...
- Our next-generation model: Gemini 1.5: Gemini 1.5 delivers dramatically enhanced performance, with a breakthrough in long-context understanding across modalities.
- Tweet from Shirin Ghaffary (@shiringhaffary): 🚨 NEW OpenAI goes to Hollywood Company has mtgs next week w/ Hollywood studios, media execs, talent agencies encouraging them to use Sora. Some filmmakers already have access. COO Brad Lightcap le...
- Tweet from clem 🤗 (@ClementDelangue): Should we acquire Stability and open-source SD3?
- Tweet from Aman Sanger (@amanrsanger): "Token Counts" for long context models are a deceiving measure of content length. For code: 100K Claude Tokens ~ 85K gpt-4 Tokens 100K Gemini Tokens ~ 81K gpt-4 Tokens 100K Llama Tokens ~ 75K...
- Tweet from Andrej Karpathy (@karpathy): LLM OS. Bear with me I'm still cooking. Specs: - LLM: OpenAI GPT-4 Turbo 256 core (batch size) processor @ 20Hz (tok/s) - RAM: 128Ktok - Filesystem: Ada002
- The Guide to Unbundling Reddit: Every few years a great unbundling occurs In 2010, Andrew Parker wrote a defining post about the "unbundling" of Craigslist where he outlined the opportunity to carve out niche products from...
- Eye On A.I.: #177 Björn Ommer: Diffusion Models Explained By Stable Diffusion's Creator: Join host Craig Smith on episode #177 of Eye on AI as he explores the cutting-edge world of generative models in artificial intelligence with Björn Ommer, a visionary AI researcher and Head of Compute...
- Tweet from Emad acc/acc (@EMostaque): As my notifications are RIP some notes: 1. My shares have majority of vote @StabilityAI 2. They have full board control The concentration of power in AI is bad for us all I decided to step down to...
- GitHub - simonw/files-to-prompt: Concatenate a directory full of files into a single prompt for use with LLMs: Concatenate a directory full of files into a single prompt for use with LLMs - simonw/files-to-prompt
- GitHub - semanser/codel: ✨ Fully autonomous AI Agent that can perform complicated tasks and projects using terminal, browser, and editor.: ✨ Fully autonomous AI Agent that can perform complicated tasks and projects using terminal, browser, and editor. - semanser/codel
- Your Automated Product Security Engineer · Pixeebot: Pixeebot provides immediate and constant fixes that make your code more secure. It's like having another security-expert developer on your side.
- Brightwave: no description found
- GitHub - OwlAIProject/Owl: A personal wearable AI that runs locally: A personal wearable AI that runs locally. Contribute to OwlAIProject/Owl development by creating an account on GitHub.
- GitHub - microsoft/LLMLingua: To speed up LLMs' inference and enhance LLM's perceive of key information, compress the prompt and KV-Cache, which achieves up to 20x compression with minimal performance loss.: To speed up LLMs' inference and enhance LLM's perceive of key information, compress the prompt and KV-Cache, which achieves up to 20x compression with minimal performance loss. - GitH...
- no title found: no description found
- Google Cloud Platform: no description found
Latent Space ▷ #ai-announcements (5 messages):
- Insights into AI Giants: A new podcast episode mentioned in a tweet unveils juicy insights into OpenAI, Google, and Adept, although not all prepared questions were covered.
- AI In Action with Llama.cpp: The AI In Action event started with a live discussion about Llama.cpp, conducted by @363877777977376768.
- ChatGPT's Unbundling Explored: A new essay on the Unbundling of ChatGPT argues that, despite stagnant user growth, OpenAI may still succeed amid a trend of users seeking specialized AI services. The essay also suggests OpenAI may need to release Sora and GPT-5 to prevent mass unsubscriptions, and is available to read here.
Link mentioned: Tweet from swyx (@swyx): The Unbundling of ChatGPT https://latent.space/p/feb-2024 A whole year has passed with ~0 growth in ChatGPT user numbers. Instead, users are exploring a whole host of verticalized players for …
Latent Space ▷ #llm-paper-club-west (14 messages🔥):
- Technical Difficulties for LLM Paper Club: A member encountered issues with obtaining speaking rights for a session in the llm-paper-club-west and expressed this in the chat.
- Zoom to the Rescue: With the inability to secure speaking rights in Discord, members resorted to using Zoom for the paper club meeting.
- Speaker Rights Confusion: There was confusion regarding how to obtain speaking rights in the Discord channel for future paper club sessions.
- Meeting Over Before Resolution: The meeting concluded on Zoom before Discord speaking permissions could be resolved, leading to the consideration of facilitating future stages.
- Access Control Assistance Offered: Another member indicated that speaking rights could be assigned by a certain individual, suggesting a possible solution for future meetings.
Latent Space ▷ #ai-in-action-club (92 messages🔥🔥):
- Discussions on Tensor Operations and Transformer Models: Members delved into tensor dimension handling, humorously dubbed "pad and pray", and pondered better IDE support for dimension enforcement. Envisioning transformer models as graphs of tensor operations with adjustable weights was highlighted as a useful mental model.
- Unveiling Music with Slono: A Spotify link was shared, showcasing Slono's work aimed at evoking the ambience of nights winding down.
- Coding and Commenting Context in LLMs: Discussion revolved around the value of comments for large language models (LLMs), emphasizing contextual information at varying levels of abstraction and the impact of comments on helping LLMs understand code.
- Anticipation for Future Tech Showdowns: Musings about C++'s speed advantage over Python and a light-hearted prediction of a 2025 face-off between Luminal, Tinygrad, and Mojo were shared, along with interest in learning more about the Luminal project.
- AI in Action Club Schedule and Topics Shared: A Google Docs spreadsheet of forthcoming AI in Action Club topics, including UI/UX patterns for generative AI, RAG architectures, and the impact of prompt formatting on model evaluation, was made accessible.
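The "pad and pray" approach to tensor dimensions mentioned above can be partially tamed with runtime shape assertions while waiting for IDE-level enforcement. A minimal pure-Python sketch (the `expect_shape` helper and its dimension-variable convention are illustrative, not something from the discussion):

```python
def expect_shape(shape, spec, env=None):
    """Check a concrete shape against a spec like ("b", "h", "s", "s").

    Integers in the spec must match exactly, strings are dimension
    variables that must be consistent across positions (and across calls
    sharing `env`), and None matches anything.
    """
    env = {} if env is None else env
    if len(shape) != len(spec):
        raise ValueError(f"rank mismatch: {shape} vs {spec}")
    for actual, want in zip(shape, spec):
        if want is None:
            continue
        if isinstance(want, str):  # dimension variable
            if env.setdefault(want, actual) != actual:
                raise ValueError(f"{want}={env[want]} but got {actual}")
        elif actual != want:
            raise ValueError(f"expected {want}, got {actual}")
    return env

# Usage: attention scores must be (batch, heads, seq, seq).
env = expect_shape((2, 8, 128, 128), ("b", "h", "s", "s"))
```

Passing a `(2, 8, 128, 64)` shape against the same spec would raise, since the two `"s"` positions disagree.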
Links mentioned:
- LLM Visualization: no description found
- Graph-Mamba: Towards Long-Range Graph Sequence Modeling with Selective State Spaces: Attention mechanisms have been widely used to capture long-range dependencies among nodes in Graph Transformers. Bottlenecked by the quadratic computational cost, attention mechanisms fail to scale in...
- Did We Just Become Best Friends? GIF - Friends Bestfriends Yep - Discover & Share GIFs: Click to view the GIF
- AI In Action: Weekly Jam Sessions: 2024 topic spreadsheet (Topic, Date, Facilitator, Resources), e.g. UI/UX patterns for GenAI, 1/26/2024, nuvic, https://maggieappleton.com/squish-structure
- slono: Artist · 110 monthly listeners.
OpenAccess AI Collective (axolotl) ▷ #general (214 messages🔥🔥):
- GaLore Optimizer Heats Up the Chat: Members discussed the GaLore optimizer, which offers significant VRAM savings during full-parameter finetuning. Concerns were raised about its "coarseness" and potential to cause model over-training, likening its granularity to "a very coarse optimizer with adjustable resolution that can update all weights of the model."
- Axolotl Discord Delves into Dataset Dilemmas: One member inquired about configurations for sharegpt and chatml in SFT and DPO, with another confirming that chatml is indeed what the model sees. Meanwhile, there was confusion over an example config setting in the Axolotl repo, potentially leading to improper dataset tokenization paths.
- Anticipation for New Models and Optimizers: Excitement was palpable about the release of the Mistral v0.2 base model, boasting a larger 32k context window, though some lamented the restriction to Mistral 7B models. GaLore remains a hot topic, with plans to test it over the weekend fueling debates on optimization strategies.
- Publishing Predicament Posted: A member shared their dilemma about whether to release a preprint of their medical model while it is undergoing its third round of journal reviews, sparking a discussion on the pros and cons of early sharing of research.
- Open Calls and Company Queries: CHAI announced support for the open-source LLM community with prizes for LLM developers, while another member encouraged companies using Axolotl for business applications to reach out and share their use cases discreetly.
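For intuition on how GaLore saves VRAM during full-parameter finetuning: gradients are projected onto a low-rank subspace, so optimizer state only needs to cover the small projected coordinates. A rough pure-Python sketch of rank-1 projection via power iteration (illustrative only; GaLore itself uses periodic SVD-based projections applied per layer):

```python
def matvec(A, x):
    """A @ x for a list-of-lists matrix."""
    return [sum(a * b for a, b in zip(row, x)) for row in A]

def matTvec(A, x):
    """A^T @ x for a list-of-lists matrix."""
    n = len(A[0])
    return [sum(A[i][j] * x[i] for i in range(len(A))) for j in range(n)]

def rank1_project(G, iters=50):
    """Project gradient matrix G onto its top singular direction.

    Power iteration on G^T G yields the top right singular vector v;
    the rank-1 projection is then (G v) v^T. Optimizer state would only
    track the small vector G v instead of the full matrix.
    """
    n = len(G[0])
    v = [1.0] * n
    for _ in range(iters):
        w = matTvec(G, matvec(G, v))          # (G^T G) v
        norm = sum(c * c for c in w) ** 0.5
        assert norm > 0, "zero gradient"
        v = [c / norm for c in w]
    Gv = matvec(G, v)                          # low-rank coordinates
    return [[Gv[i] * v[j] for j in range(n)] for i in range(len(G))]

# A rank-1 gradient is reproduced exactly by its rank-1 projection.
G = [[2.0, 4.0, 6.0], [4.0, 8.0, 12.0]]
P = rank1_project(G)
```

For higher-rank gradients the projection discards the smaller singular directions, which is the "coarseness" the members debated.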
Links mentioned:
- @smangrul on Hugging Face: "🤗 PEFT v0.10.0 release! Some highlights: FSDP+QLoRA and DeepSpeed…": no description found
- Fully Sharded Data Parallel: no description found
- Tweet from Xiang Yue (@xiangyue96): @MistralAI just released their v0.2 Base😱. @WenhuChen and I quickly evaluated a few benchmarks using the OpenCompass evaluation package. It seems that the capability dropped a little bit on nearly al…
- DeepSpeed: no description found
- Chai Prize: Complete and win 3 days unlimited messages!
- GitHub - mistralai-sf24/hackathon: Contribute to mistralai-sf24/hackathon development by creating an account on GitHub.
- axolotl/examples/mistral/config.yml at main · OpenAccess-AI-Collective/axolotl: Go ahead and axolotl questions. Contribute to OpenAccess-AI-Collective/axolotl development by creating an account on GitHub.
- trl/trl/trainer/dpo_trainer.py at 8534f0edf8608ad6bcbea9beefae380fa60ded77 · huggingface/trl: Train transformer language models with reinforcement learning. - huggingface/trl
- Third-party benchmark · Issue #6 · jiaweizzhao/GaLore: Hello, thank you very much for such excellent work. We have conducted some experiments using Llama-Factory, and the results indicate that Galore can significantly reduce memory usage during full pa…
OpenAccess AI Collective (axolotl) ▷ #axolotl-dev (15 messages🔥):
- Unexpected TypeError in OpenAI Example: A member encountered a `TypeError` when running the example from "examples/openllama-3b/qlora.yml": `LlamaRotaryEmbedding.forward()` received an unexpected keyword argument 'seq_len'.
- Room Redirection for Help: Another user redirected the member facing the TypeError to a specific help channel with the ID #1111279858136383509 for better assistance.
- Discussing Efficiency of LLM Fine-tuning: Members discussed the possibility of fine-tuning a 7b model in 27gb of memory, referencing a GitHub repository called torchtune that facilitates LLM Fine-tuning without relying on Huggingface libraries.
- Fine-tuning Ramifications: A member indicated the benefits of using native torch for efficiency while acknowledging the steeper learning curve compared to using libraries like Huggingface.
- Recommendation and Teasing About Huggingface: A pull request on torchtune was recommended for reviewing how to full fine-tune with less than 16GB of RAM, along with a playful jab at Huggingfaceâs expense.
Links mentioned:
- GitHub - pytorch/torchtune: A Native-PyTorch Library for LLM Fine-tuning: A Native-PyTorch Library for LLM Fine-tuning. Contribute to pytorch/torchtune development by creating an account on GitHub.
- Full finetune < 16GB by rohan-varma · Pull Request #527 · pytorch/torchtune: Context We'd like to enable a variant of full finetune that trains in < 16GB of RAM for users with consumer grade GPUs that have limited GPU RAM. This PR enables the full finetune to fit into ...
OpenAccess AI Collective (axolotl) ▷ #general-help (14 messages🔥):
- Mixtral Fine-Tuning Techniques Unclear: A member is looking for advice on how to target the router layers in Mixtral with GaLore but has not found clear documentation online. They mentioned an intention to try -block_sparse_moe and -self_attn but later lamented that it does not work with ZeRO-3.
- Coding Assistant Training for Mixtral-7B: A user asked how to train and fine-tune a "Mixtral-7B" model to be a coding assistant using runpod, Python, etc., questioning the tools, IDEs, and concepts needed to train such a model on their own hardware. Another member acknowledged the complexity of the question.
- Data Preprocessing Error in Axolotl: While attempting to preprocess data with Axolotl, a member hit a KeyError for the "instruction" key, even though they had confirmed all rows included it. Another participant suggested some rows might be missing the key, but verification showed otherwise.
- Fine-Tuning Issues with TheBloke's Model: A user encountered an error while trying to fine-tune the TheBloke/Wizard-Vicuna-7B-Uncensored-GPTQ model using AutoTrain, sharing error output indicating a FileNotFoundError in subprocess.py on Windows. They also shared a link to the model's repository on Hugging Face.
- Inquiry About "Gemma" Compatibility with PyTorch: A member asked if "gemma" is still incompatible with PyTorch, seeking up-to-date information on the issue. No clear consensus or answer was provided within the discussed messages.
Link mentioned: TheBloke/Wizard-Vicuna-7B-Uncensored-GPTQ · Hugging Face: no description found
OpenRouter (Alex Atallah) ▷ #announcements (8 messages🔥):
- Midnight 70B Launches to Acclaim: Midnight 70B is the newest and most anticipated model optimized for storytelling and roleplay, a successor to Rogue Rose and Aurora Nights. Available at OpenRouter, with an introductory price of $0.009/1k tokens, a 25% discount.
- New Features for Cost Monitoring and Management: OpenRouter introduced Usage Analytics, a new chart displaying daily spending per model, as well as a Billing Portal, accessible via users' accounts, for managing credits and invoices.
- Price Adjustments for Noromaid Mixtral and Bagel: Due to the high cost of running them, the discounts on the Noromaid Mixtral and Bagel models have been removed; the former is now priced at $0.008/1k tokens and the latter at $0.00575/1k tokens.
- Request for Expanded Context Length: A user expressed a desire to use Noromaid Mixtral at its original 32K context length, stating that the current 8K is insufficient.
- Database Downtime Due to DDoS: The OpenRouter platform experienced database issues due to a DDoS attack that bypassed Cloudflare, but stability was restored as per the latest update.
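The per-1k-token prices above are easiest to compare with a little arithmetic; a tiny sketch (the helper is hypothetical, not an OpenRouter API):

```python
def cost_usd(tokens, price_per_1k):
    """Cost of a request at a $/1k-token rate."""
    return tokens / 1000 * price_per_1k

# Midnight 70B at its introductory $0.009/1k tokens:
print(cost_usd(8000, 0.009))  # an 8k-token request → $0.072

# The quoted 25% discount implies a list price of $0.012/1k:
assert abs(0.009 / 0.75 - 0.012) < 1e-9
```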
Link mentioned: Midnight Rose 70B by sophosympatheia | OpenRouter: A merge with a complex family tree, this model was crafted for roleplaying and storytelling. Midnight Rose is a successor to Rogue Rose and Aurora Nights and improves upon them both. It wants to produ…
OpenRouter (Alex Atallah) ▷ #general (208 messages🔥🔥):
- Claude 3 Self-Moderated Version Endorsed: Claude 3âs self-moderated version is recommended over the site-filtered version due to better selectivity in rejection.
- The Case of the Underappreciated Grok: Grok models sparked a debate, with some users arguing that while it's not as powerful as Mixtral, it's a solid base model when not compared to fine-tuned alternatives, despite being more expensive.
- Fine-Tuning Model Favorites: Multiple users discussed their experiences with different models for roleplay and tasks, expressing interest in contexts extended beyond 8k for models like Midnight Rose and in the consistent budget-friendly quality of models like Haiku.
- OpenRouter and Model Performances: Some users reported differences in the quality of model completions when using models like Opus and Haiku on OpenRouter versus direct API access and are looking into whether it involves default system prompts.
- Perplexity Citations in OpenRouter: Discussions around not receiving citation data via OpenRouter when using Perplexity emerged, with acknowledgement that while the data exists, itâs not currently returned due to API response consistency.
Links mentioned:
- Tweet from Delip Rao e/σ (@deliprao): I look at this and don't walk away thinking Grok is better. As a pragmatic person, I look at it and wonder why bother with Grok (314B) when you have Mixtral with almost similar performance and is a...
- xAI Grok: no description found
- world_sim: no description found
- imgur.com: Discover the magic of the internet at Imgur, a community powered entertainment destination. Lift your spirits with funny jokes, trending memes, entertaining gifs, inspiring stories, viral videos, and ...
- sophosympatheia/Midnight-Rose-70B-v2.0.3 · Hugging Face: no description found
Eleuther ▷ #general (64 messages🔥🔥):
- GPU Battlegrounds: Apple's MPS vs. PyTorch: A member is working diligently on improving the MPS backend in PyTorch, discussing both the significance of the work for local model testing and finetuning and the potential for widespread performance benefits. Despite challenges, they remain committed, highlighting an issue in tensor copying that has affected MPS since September 2022.
- Debate on Token Block Strategies for LLM Pretraining: Members engaged in a nuanced debate about the best way to form token blocks for language model pretraining, deliberating between overlapping and non-overlapping sequences. Multiple perspectives were offered, weighing the importance of beginning-of-sentence tokens and the implications of each method for training efficacy.
- AMD vs. Nvidia GPU Drivers, a Market Challenge: There was a lively discussion about the perceived inadequacy of AMD Radeon drivers compared to Nvidia's, and its market implications. Participants noted that consumer GPU drivers often offload compatibility work elsewhere, debated the potential for AMD open-sourcing their drivers, and considered activist investor intervention to drive corporate change at AMD.
- Concerns Over Public Speaking Prowess in AI Industry: There was a brief commentary comparing the public speaking abilities of well-known figures in the tech industry. Lex Fridman was mentioned as a point of reference in discussing the styles of other speakers.
- Merger Mania in Machine Learning: A member introduced a new merge method they are developing for combining models that could potentially surpass existing methods like DARE. They noted that their approach is in early stages and that more testing is needed to confirm its effectiveness.
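The token-block debate above reduces to the stride used when slicing the pretraining token stream; a minimal sketch (the helper name is hypothetical):

```python
def make_blocks(tokens, block_size, stride):
    """Slice a token stream into fixed-size training blocks.

    stride == block_size gives non-overlapping blocks; a smaller stride
    gives overlapping blocks, so tokens near block boundaries are also
    seen mid-context (at the cost of reprocessing them).
    """
    return [
        tokens[start:start + block_size]
        for start in range(0, len(tokens) - block_size + 1, stride)
    ]

tokens = list(range(10))
non_overlap = make_blocks(tokens, block_size=4, stride=4)  # 2 blocks
overlap = make_blocks(tokens, block_size=4, stride=2)      # 4 blocks
```

The beginning-of-sentence question in the thread is orthogonal: either scheme can prepend a BOS token to each block, or only to blocks that start at document boundaries.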
Link mentioned: Ratatouille • Flashback GIF - Ratatouille Flashback Childhood - Discover & Share GIFs: Click to view the GIF
Eleuther ▷ #research (105 messages🔥🔥):
- DenseFormer with Depth-Weighted-Average: A new tweak to the transformer architecture, named DenseFormer, adds a depth-weighted-average step that significantly improves the perplexity of large-scale models without enlarging their size. A discussion on Hacker News indicates skepticism over its scalability, while proponents argue for its potential.
- Mamba Meets Zigzag: Addressing the inherent scalability and complexity issues of diffusion models, a study introduces Zigzag Mamba, a variant that improves memory use and speed when processing high-resolution visual datasets. The study, contributed by <@193386166517628929>, focuses on optimizing sequence-flattening methods for improved performance.
- Putting Pieces Together in DiPaCo: The proposed DiPaCo architecture innovates machine learning model training by using a path-composition approach to ensure robustness and reduced communication needs across potentially disconnected computational workers, suggesting a path toward decentralized model training.
- Potential of "Proof-of-Training-Data": Addressing concerns about model provenance and the risk of poisoned samples, an arXiv abstract explores "Proof-of-Training-Data," which would allow verification of the data and computation used to train neural networks.
- Training Bias with BiTFiT: Research has been conducted on applying BitFit to modern large language models like LLaMA-2/Mistral. The posted study demonstrates efficient fine-tuning of LLaMA models by initializing new bias terms and freezing other parameters, thereby enhancing parameter efficiency.
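The DenseFormer idea summarized above can be stated in a few lines: each stage's input is a learned weighted average over all previous blocks' outputs, not just the last one. A toy sketch on plain vectors (weights hardcoded here; the paper learns one weight vector per depth):

```python
def depth_weighted_average(block_outputs, weights):
    """Combine outputs x_0..x_i of all blocks so far with learned weights."""
    assert len(block_outputs) == len(weights)
    dim = len(block_outputs[0])
    return [
        sum(w * x[d] for w, x in zip(weights, block_outputs))
        for d in range(dim)
    ]

# Three blocks' outputs for one 2-d token representation:
outs = [[1.0, 0.0], [0.0, 1.0], [2.0, 2.0]]
mixed = depth_weighted_average(outs, weights=[0.25, 0.25, 0.5])
# mixed feeds the next block instead of outs[-1] alone
```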
Links mentioned:
- Catformer: Designing Stable Transformers via Sensitivity Analysis: Transformer architectures are widely used, but training them is non-trivial, requiring custom learning rate schedules, scaling terms, residual connections, careful placement of submodules such as n...
- DenseFormer: Enhancing Information Flow in Transformers via Depth Weighted Averaging: The transformer architecture by Vaswani et al. (2017) is now ubiquitous across application domains, from natural language processing to speech processing and image understanding. We propose DenseForme...
- no title found: no description found
- ZigMa: Zigzag Mamba Diffusion Model: The diffusion model has long been plagued by scalability and quadratic complexity issues, especially within transformer-based structures. In this study, we aim to leverage the long sequence modeling c...
- Manipulating Chess-GPT's World Model: Manipulating Chess-GPT's World Model
- Explicit Inductive Bias for Transfer Learning with Convolutional Networks: In inductive transfer learning, fine-tuning pre-trained convolutional networks substantially outperforms training from scratch. When using fine-tuning, the underlying assumption is that the pre-traine...
- DiPaCo: Distributed Path Composition: Progress in machine learning (ML) has been fueled by scaling neural network models. This scaling has been enabled by ever more heroic feats of engineering, necessary for accommodating ML approaches th...
- Tools for Verifying Neural Models' Training Data: It is important that consumers and regulators can verify the provenance of large neural models to evaluate their capabilities and risks. We introduce the concept of a "Proof-of-Training-Data":...
- The Description Length of Deep Learning Models: Solomonoff's general theory of inference and the Minimum Description Length principle formalize Occam's razor, and hold that a good model of data is a model that is good at losslessly compress...
- Sphere Neural-Networks for Rational Reasoning: The success of Large Language Models (LLMs), e.g., ChatGPT, is witnessed by their planetary popularity, their capability of human-like question-answering, and also by their steadily improved reasoning...
- OmniNet: Omnidirectional Representations from Transformers: This paper proposes Omnidirectional Representations from Transformers (OmniNet). In OmniNet, instead of maintaining a strictly horizontal receptive field, each token is allowed to attend to all tokens...
- GitHub - lawrence-cj/LLaMA-DiffFit: Efficient Fine-tuning LLaMA Using DiffFit within 0.7M Parameters: Efficient Fine-tuning LLaMA Using DiffFit within 0.7M Parameters - lawrence-cj/LLaMA-DiffFit
Eleuther ▷ #interpretability-general (5 messages):
- SVM Kernels Explored for Pythia Embeddings: One member reported that, after running several SVM kernels on Pythia's input embeddings, the sigmoid kernel outperformed the rbf, linear, and poly kernels. The finding came from trial and error rather than cleverness, and the user expressed a desire for intuition to streamline the process.
- SVM vs. Logistic Regression: A participant admitted they lack knowledge about SVMs and would rather have used logistic regression for classification problems.
- Tokengrams Repository Update: The Tokengrams project has progressed to the point of usability, as indicated by a shared link to the GitHub repository. The tool efficiently computes and stores token n-grams from large corpora. GitHub - EleutherAI/tokengrams.
- Chess-GPT Interventions Summarized: A link to a blog post was shared detailing the Chess-GPT project, which uses a language model to predict chess moves from PGN strings and estimate player skill levels. The post describes using linear probes to validate the model's computations and notes that Chess-GPT plays chess at approximately 1500 Elo. Chess GPT Interventions.
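For readers comparing the SVM kernels mentioned above: each kernel is just a different similarity function between embedding vectors. A pure-Python sketch of the four (the gamma/coef0/degree defaults are illustrative, not the values the member used):

```python
import math

def dot(x, y):
    return sum(a * b for a, b in zip(x, y))

def linear(x, y):
    return dot(x, y)

def poly(x, y, gamma=1.0, coef0=1.0, degree=3):
    return (gamma * dot(x, y) + coef0) ** degree

def rbf(x, y, gamma=1.0):
    sq = sum((a - b) ** 2 for a, b in zip(x, y))
    return math.exp(-gamma * sq)

def sigmoid(x, y, gamma=1.0, coef0=0.0):
    # The kernel the member found worked best on the input embeddings.
    return math.tanh(gamma * dot(x, y) + coef0)

x, y = [1.0, 0.0], [0.5, 0.5]
print(linear(x, y), poly(x, y), rbf(x, y), sigmoid(x, y))
```

A rough intuition for the trial-and-error result: the sigmoid kernel saturates large dot products, which can behave differently from rbf's distance-based similarity on unnormalized embeddings.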
Links mentioned:
- Manipulating Chess-GPT's World Model: Manipulating Chess-GPT's World Model
- GitHub - EleutherAI/tokengrams: Efficiently computing & storing token n-grams from large corpora: Efficiently computing & storing token n-grams from large corpora - EleutherAI/tokengrams
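Tokengrams (linked above) handles this efficiently at corpus scale; the core operation, counting token n-grams, is simple to sketch in pure Python:

```python
from collections import Counter

def count_ngrams(tokens, n):
    """Count all length-n windows over a token-id sequence."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

tokens = [5, 3, 5, 3, 5]
bigrams = count_ngrams(tokens, 2)
# (5, 3) occurs twice and (3, 5) occurs twice
```

A naive Counter like this breaks down on billion-token corpora, which is the storage and query problem the repository addresses.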
Eleuther ▷ #lm-thunderdome (26 messages🔥):
- Potential Variance in Evaluation Results: A member discussed variation in evaluation results: about half their runs matched exactly, while the other half differed by roughly 0.5% when comparing Hugging Face (HF) transformers with Megatron-DeepSpeed evaluations. They noted that numerically checking a forward pass could help identify fundamental differences between the implementations.
- Determinism in Attention Mechanisms Questioned: In probing the discrepancies, a member asked whether flash attention could contribute to the variation, but it was clarified that flash attention is deterministic in the forward pass. They also speculated whether fused KQV multiplications could cause numerical discrepancies, potentially due to bfloat16.
- Minecraft as an RL Benchmark for LLM Collaboration: One member highlighted a GitHub repository, GitHub - danijar/diamond_env, a standardized Minecraft Diamond Environment for Reinforcement Learning. They also referenced an issue on the Voyager project's GitHub discussing potential collaboration with LM harness projects.
- Inverse-Scaling Evaluation Pipeline Inquiries: A member asked about adapting the multi-choice problem-solving approach from the inverse-scaling evaluation pipeline to work with lm-eval-harness. A code snippet from their GitHub repository was discussed, leading to an explanation of how logits are treated for an answer choice in the harness.
- Question on BBQ-Lite Task Scoring Method in BigBench: A member asked whether the bbq_lite subset of the BigBench task uses straight accuracy scoring, proposing that the complexity of the original bias-scoring mechanism was avoided in the implementation. It was suggested to refer to a specific pull request for an alternative BBQ implementation in lm-evaluation-harness.
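On how logits are treated for an answer choice: multiple-choice tasks are commonly scored by summing the model's log-probabilities over each choice's tokens and taking the argmax. A toy sketch with hand-written logits (this shows the shape of the computation, not lm-eval-harness's actual code):

```python
import math

def log_softmax(logits):
    """Numerically stable log-softmax over a list of logits."""
    m = max(logits)
    lse = m + math.log(sum(math.exp(v - m) for v in logits))
    return [v - lse for v in logits]

def choice_logprob(logits_per_step, choice_tokens):
    """Sum log P(token) over each token of an answer continuation."""
    return sum(
        log_softmax(step)[tok]
        for step, tok in zip(logits_per_step, choice_tokens)
    )

# Vocab of 3 token ids; fake per-step logits for 2-token continuations.
steps = [[2.0, 0.0, -1.0], [0.0, 3.0, 0.0]]
scores = {
    "A": choice_logprob(steps, [0, 1]),  # tokens the model favors
    "B": choice_logprob(steps, [2, 0]),
}
best = max(scores, key=scores.get)  # → "A"
```

Harnesses typically also offer length-normalized variants (dividing by token or byte count) so longer choices are not penalized.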
Links mentioned:
- inverse-scaling-eval-pipeline/eval_pipeline/models.py at main · naimenz/inverse-scaling-eval-pipeline: Basic pipeline for running different sized GPT models and plotting the results - naimenz/inverse-scaling-eval-pipeline
- Implement a way test local models · Issue #149 · MineDojo/Voyager: Hello, Wonderful work on Voyager. Please consider added local model support (instead of openai package - using something like Python requests package to a localhost local model using openai complet...
- GitHub - danijar/diamond_env: Standardized Minecraft Diamond Environment for Reinforcement Learning: Standardized Minecraft Diamond Environment for Reinforcement Learning - danijar/diamond_env
- Add various social bias tasks by oskarvanderwal · Pull Request #1185 · EleutherAI/lm-evaluation-harness: This PR implements various popular benchmarks for evaluating LMs for social biases. I also aim to have these validated where possible: e.g., by comparing with existing implementations or results, o...
Eleuther ▷ #multimodal-general (3 messages):
- Inquiry about Multimodal Embedding Theories: A member asked for theoretical works on multimodal embedding spaces, indicating a broad interest without looking for anything specific.
- Insight on Embeddings in Stable Diffusion Culture: Stable Diffusionâs subculture treats embeddings similarly to IMG2IMG workflows in their diffusion models, notably SDXL IMG2IMG which might offer a lead for research.
- Clarification on Terminology: The term "IMG2IMG" could be confused with "init image" usage, especially due to the use of this phrase in the Automatic1111 web UI; alternatives like "image prompting" or "image variations" were suggested.
CUDA MODE ▷ #general (9 messages🔥):
- Discord stage channel struggles: During a GTC event, the Discord stage channel encountered screen sharing issues, leading to a quick resolution by switching to a voice channel. A suggestion was made to use voice channels by default for future lectures.
- Google Meet over Discord: One member expressed frustration with Discord streams and proposed using Google Meet for future sessions, seeking opinions and contacts at Discord for feedback on stream stability.
- ML and CUDA Connection: A member inquired about when CUDA programming becomes necessary in ML, as theyâve never had to delve that deep in their ML practice.
- CUDA Programming for Speed: A link to a YouTube lecture was shared to explain profiling CUDA kernels in PyTorch: Lecture 1 How to profile CUDA kernels in PyTorch. Accompanying resources include slides on Google Docs and a GitHub code repository.
- Understanding When to Drop to CUDA: In response to the lecture, a member summarized that CUDA is necessary for performance gains when PyTorch is not fast enough, likening it to writing in C for CPU programs. Another member agreed with this takeaway.
Link mentioned: Lecture 1 How to profile CUDA kernels in PyTorch: Slides: https://docs.google.com/presentation/d/110dnMW94LX1ySWxu9La17AVUxjgSaQDLOotFC3BZZD4/edit?usp=sharingCode: https://github.com/msaroufim/cudamodelecture1
CUDA MODE ▷ #triton (27 messages🔥):
- Debugging Triton Performance Issues: A member sought advice on debugging performance issues with Triton kernels, comparing unsloth's fast_embedded_rope performance unfavorably to eager PyTorch on an A10G with contiguous tensors.
- Assurances on Triton Compiler Bug Resolution: Members discussed historical Triton compiler bugs referenced in code comments, with clarification that these are no longer current issues. Barriers such as `debug_barrier()` were explained as necessary for correctness, analogous to `__syncthreads()` in CUDA.
- Triton Operations May Be Phased Out: A contributor indicated that Triton operations might be removed in the future, advising against submitting PRs to address related issues, and confirmed that the tutorials would remain available for learning purposes.
- Possible Benchmarking From Meta: It was mentioned that Meta could potentially introduce an op benchmark for Triton, which would provide reference implementations for developers to utilize.
- Proposal for Collaboration on Architecture Optimization: A new prototype folder in the `torchao` repository was suggested to a member, with the intention of merging their work and collaborating on API design for efficient kernel usage.
Links mentioned:
- GitHub - pytorch-labs/ao: torchao: PyTorch Architecture Optimization (AO). A repository to host AO techniques and performant kernels that work with PyTorch.: torchao: PyTorch Architecture Optimization (AO). A repository to host AO techniques and performant kernels that work with PyTorch. - pytorch-labs/ao
- triton/python/triton/ops/cross_entropy.py at fb8983e5a2754ce793ab8d14ed0c333bfd9ba197 · openai/triton: Development repository for the Triton language and compiler - openai/triton
CUDA MODE ▷ #cuda (7 messages):
- Seeking the Blackwell NVIDIA Whitepaper: A user inquired about the release of the Blackwell NVIDIA whitepaper, but no direct information on this topic was provided.
- GTC Session Details Shared: A member shared a link to the GTC Session Catalog highlighting the upcoming workshops, AI conference and expo dates, and the keynote scheduled for March 17-21 in San Jose, CA and virtually.
- CUDA Toolkit and CuDNN Installation Guidance: A user asked if it was okay to install a CUDA toolkit of a higher version than shown by `nvidia-smi`, and about post-installation steps for CuDNN. Another member mentioned that one needs to add CuDNN to the path or copy its files into the toolkit directory.
- Link Omission for Toolkit/Driver Compatibility: A member initially forgot to include a link when referring to toolkit/driver compatibility. The link was subsequently provided, directing users to NVIDIA's CUDA Compatibility Guide to understand the minimum driver versions required for each toolkit.
- Call for Favorite CUDA Kernels: A member invited others to submit favorite CUDA kernels that could optimize operations for large language models (LLMs), to possibly feature in a Thunder tutorial. The discussion links to a GitHub issue on Lightning AI's repository that discusses this feature.
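The cuDNN post-install step mentioned above usually comes down to a couple of environment variables or file copies. A minimal sketch, assuming a hypothetical CUDA 12.4 install under `/usr/local`; substitute your actual version and paths:

```shell
# Hypothetical paths; adjust to your toolkit version and install location.
# Note: nvidia-smi reports the highest CUDA version the driver supports,
# so a newer toolkit may also require a driver update (see NVIDIA's
# CUDA Compatibility Guide for minimum driver versions).
export PATH=/usr/local/cuda-12.4/bin:$PATH
export LD_LIBRARY_PATH=/usr/local/cuda-12.4/lib64:$LD_LIBRARY_PATH

# cuDNN option 1: copy its headers/libs into the toolkit tree, e.g.
#   sudo cp include/cudnn*.h /usr/local/cuda-12.4/include/
#   sudo cp lib/libcudnn*    /usr/local/cuda-12.4/lib64/
# cuDNN option 2: leave it where it is and add its own directories
# to PATH / LD_LIBRARY_PATH as above.
```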
Links mentioned:
- CUDA Compatibility :: NVIDIA GPU Management and Deployment Documentation: no description found
- NVIDIA #GTC2024 Conference Session Catalog: Register now. Streamed online. March 18-21, 2024.
- Support for CUDA kernels · Issue #70 · Lightning-AI/lightning-thunder: 🚀 Feature Hi there 👋 From the main readme file I noticed that Thunder except custom kernels, but only the ones that are written in Trition. Is there a plan to support CUDA kernels? Motivation I'...
CUDA MODE ▷ #algorithms (3 messages):
- Link to New Matrix Decomposition Paper: A paper on Arrow Matrix Decomposition by researchers Lukas Gianinazzi, Alexandros Nikolaos Ziogas, and others was shared, providing insights on a novel approach to distributed sparse matrix multiplication. Access the research here.
- GitHub Repository for Arrow Matrix Decomposition: The Arrow Matrix Decomposition code has been made available on GitHub for those interested in communication-efficient distributed sparse matrix multiplication. The repo is available at this GitHub link.
Links mentioned:
- Arrow Matrix Decomposition: A Novel Approach for Communication-Efficient Sparse Matrix Multiplication: We propose a novel approach to iterated sparse matrix dense matrix multiplication, a fundamental computational kernel in scientific computing and graph neural network training. In cases where matrix s...
- GitHub - spcl/arrow-matrix: Arrow Matrix Decomposition - Communication-Efficient Distributed Sparse Matrix Multiplication: Arrow Matrix Decomposition - Communication-Efficient Distributed Sparse Matrix Multiplication - spcl/arrow-matrix
CUDA MODE ▷ #beginner (3 messages):
- GPU vs CPU Architecture Complexity: A member questioned whether NVIDIA GPU architecture is simpler than that of modern CPUs. Another member clarified that GPUs are specialized for high throughput of simple operations, in contrast to CPUs, which handle a lower throughput but more complex operations.
CUDA MODE ▷ #pmpp-book (12 messages🔥):
- Accountability in Progress: A member committed to completing and discussing exercises for Chapters 3 & 4 of their study material after long work shifts, using public accountability as a motivator.
- Resource Sharing for Exercise Answers: It was suggested to create a shared Google Doc to compile agreed-upon exercise answers for cross-checking and as a resource for all members.
- Exclusive Access to Exercise Solutions: One member proposed starting a shared document with exercise solutions, offering access to those who show their initial attempt to maintain the challenge integrity.
- Experience Sharing Among Members: Members exchanged their backgrounds regarding experience with C++ and multithreading, with varying focus on CUDA and broader parallel programming concepts for applying to various technologies.
- Collaborative Learning Through Shared Solutions: A link to a Google Doc containing Chapter 2 exercise solutions was shared, accessible to those who DM the creator after attempting the exercises themselves. Ch 2 Exercise Solutions
CUDA MODE ▷ #youtube-recordings (5 messages):
- Inquiring About Lecture 11 Upload: A user inquired when the recording for Lecture 11 would be uploaded. Another user responded that it will be on YouTube once Mark has time, sharing a temporary link to watch it on OneDrive.
- Lecture 11 Now on YouTube: It was confirmed that Lecture 11 had been uploaded to YouTube.
- Seeking Sparsity Lecture Slides: A user requested the slides for the Sparsity lecture, asking if they were public and for a link. Another user pinged a specific member to share the slides if possible.
CUDA MODE ▷ #ring-attention (21 messages🔥):
- Diving Back into Ring Attention: A member mentioned they will focus on Ring Attention for the next ten days to run some tests and explore training details.
- Clarifying Meetup Time Post-Daylight Saving: A member asked about the exact time for regular meetings, to which another member replied with the scheduled timing using a Discord timestamp: <t:1711299600:t>.
- Potential Workspace Upgrade Suggestion: One member proposed increasing the workspace folder disk quota, followed by a discussion about possibly migrating to a new machine with more storage.
- Sharing Progress and Workspace Access: Several links to Wandb.ai were shared showing progress on runs related to Axolotl. SSH configuration was updated, and discussions about reinstalling conda and re-adding SSH keys took place.
- Technical Adjustments for Collaboration: There were conversations regarding conda reinstallation, with the base environment being moved under `/workspace/miniconda3`. SSH access was being coordinated, with requests for public keys to be sent by those needing to connect for the first time.
Links mentioned:
- iron-bound: Weights & Biases, developer tools for machine learning
- iron-bound: Weights & Biases, developer tools for machine learning
CUDA MODE ▷ #off-topic (15 messages🔥):
- GPU Grins Back: A tweet showing the new Blackwell GPUs seemingly having a smiley face pattern was highlighted. A Twitter link was shared for amusement.
- NVIDIA's Best Chips Yet: The B200 accelerators with their impressive specifications were discussed, citing them as the best on the market when combined with the CUDA ecosystem. An AnandTech article detailing NVIDIA's Blackwell architecture was shared.
- Hidden in Plain Sight: A member revealed the existence of an "NVIDIA Developer" Discord server linked to a GitHub discussion (GitHub link) about the CUTLASS library.
- Diving Into New Data Types: Reference materials were requested for new float/int data types in deep learning, leading to the sharing of an FP8 introduction paper and an OCP standardization post about various companies standardizing next-generation narrow-precision data formats.
- A Sea of Standards: Discussion surrounded the implementation variety and lack of an IEEE standard for new floating-point formats, with a notable absence of Google from the consortium agreeing on new formats.
Links mentioned:
- NVIDIA Blackwell Architecture and B200/B100 Accelerators Announced: Going Bigger With Smaller Data: no description found
- Tweet from Tanishq Mathew Abraham, Ph.D. (@iScienceLuvr): Why isn't anybody talking about the fact that the new Blackwell GPUs are literally smiling at us lol
- New Discord Channel! · NVIDIA/cutlass · Discussion #1086: As a means to further improve the user experience and education around CUTLASS, we have created a new Discord channel! Click the link to join! See you there and we thank you for all of your support :)
- Open Compute Project: no description found
CUDA MODE ▷ #triton-puzzles (37 messages🔥):
- Stuck on Puzzle 4: A member is debugging Puzzle 4 with a discrepancy between the expected and actual test results. They shared their print statements to cross-check their answers, mentioning that the issue occurs when using torch for the outer sum.
- Insights on Puzzle 10 Batching: In response to a query about whether to parallelize on the batch dimension for Puzzle 10, it's mentioned that the focus should be on keeping the kernel in fast shared memory, not necessarily on parallelizing the batch dimension, although tensor cores could also be utilized.
- Initializing Arrays with Negative Infinity: A discussion took place about how to initialize an array with all `-inf` values in Triton. Functions like `tl.full` and substitutes like a large negative number are the suggested solutions, as `tl.arange(0, B1) * float("-inf")` results in NaNs due to 0 * -inf.
- Challenges with Indexing in Triton: Queries about single-position indexing and slicing in arrays led to the clarification that such operations are not supported in Triton. This is due to how arrays and memory are handled, and workarounds for these limitations involve avoiding direct indexing or employing associative scans.
- Puzzle 3 Exploration Reveals Understanding: A self-described "noob" question about Puzzle 3 led to a user figuring out their misunderstanding on their own. The issue revolved around loading and adding vectors within the Triton kernel.
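The 0 * -inf pitfall behind the `-inf` initialization bullet is plain IEEE-754 behavior and can be reproduced in ordinary Python; a minimal sketch (the `tl.full`-style fix is shown by analogy, since the real Triton call needs a GPU):

```python
import math

# Multiplying an index range by -inf poisons the first slot:
# 0 * -inf is NaN under IEEE-754, so this is NOT an all--inf array.
scaled = [i * float("-inf") for i in range(4)]
assert math.isnan(scaled[0])
assert scaled[1] == float("-inf")

# The tl.full-style fix: write -inf into every slot directly.
filled = [float("-inf")] * 4
assert all(v == float("-inf") for v in filled)
```

This is why `tl.full((B1,), float("-inf"), tl.float32)` (or a large negative constant) is preferred over scaling `tl.arange`.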
Interconnects (Nathan Lambert) ▷ #news (11 messages🔥):
- Mistral Hints at New Model at Hackathon: Chat messages link to tweets by @AlexReibman and @adarshxs indicating that Mistral dropped a new model at the @cerebral_valley hackathon.
- No Magnet for the New Release Announcement: @natolambert notes that the new model release is absent any magnet links, expressing a bit of disappointment.
- Mistral 7B v0.2 Base Model Details Revealed: @xeophon. shares direct links from @MistralAILabs that detail the new release of Mistral 7B v0.2 Base, including specifics on its configuration and where to find guidance on how to fine-tune the model.
- Reflection on Mistral's Growth: @xeophon. comments casually on the rapid growth and development of Mistral, perceived through the frequency of new model releases.
- Clarification on Mistral Model Versions: Members @philpax and @xeophon. discuss the iterations of Mistral models, clarifying that the recently mentioned Mistral-0.2 is not a completely new model but related to a previous instruct version, with @philpax initially misunderstanding the versioning before correcting himself.
Links mentioned:
- Tweet from Alex Reibman (@AlexReibman): Mistral casually dropping a new model at the @cerebral_valley hackathon
- Tweet from Adarsh (@adarshxs): yo @MistralAI dropping a new model today!!
- Tweet from Mistral AI Labs (@MistralAILabs): New release: Mistral 7B v0.2 Base (Raw pretrained model used to train Mistral-7B-Instruct-v0.2) 🔸 https://models.mistralcdn.com/mistral-7b-v0-2/mistral-7B-v0.2.tar 🔸 32k context window 🔸 Rope Theta...
Interconnects (Nathan Lambert) ▷ #ml-questions (2 messages):
- Nemo Checkpoint Conversion Inquiry: A member inquired about how to convert a Nemo checkpoint to be compatible with Hugging Face for inference purposes.
- Exploring Checkpoint Wrapping: The same member also asked for advice on wrapping Nemo checkpoints for use, potentially looking for a wrapper or an interface.
Interconnects (Nathan Lambert) ▷ #ml-drama (29 messages🔥):
- Stability AI CEO Steps Down: Stability AI announced that CEO Emad Mostaque resigned from his CEO role and board position to pursue decentralized AI, with Shan Shan Wong and Christian Laforte stepping in as interim co-CEOs. Mostaque's tweets hinted at a focus on #DecentralizedAI and governance in AI.
- Stability AI Internal Struggles and Speculations: Discussion in the chat suggests that Stability AI has faced longstanding internal issues, leading to Emad Mostaque's departure from the company. Members debated whether Mostaque's actions were a grift or a result of the company's continuous scramble to find direction.
- The Fine Line Between Contribution and Grift: Chat members shared perspectives on the nature of Stability AI's operations, with some feeling it appropriately licensed algorithms developed by academics, while others saw it as questionable given the academics' minor compute contributions.
- AI Community's Take on Emad Mostaque's Departure: Opinions varied about Emad Mostaque's legacy, with some perceiving him as a grifter while acknowledging the legitimate aspects of Stability AI's business.
- Alternatives for AI Academics: A point was raised regarding the limited options for academics in AI, indicating a preference for collaborating with companies like Stability AI to have a more significant impact than is possible with limited resources in academia.
Links mentioned:
- Tweet from clem 🤗 (@ClementDelangue): Should we acquire Stability and open-source SD3?
- Stability AI Announcement - Stability AI: Earlier today, Emad Mostaque resigned from his role as CEO of Stability AI and from his position on the Board of Directors of the company to pursue decentralized AI. The Board of Directors has appoin...
- Tweet from Emad acc/acc (@EMostaque): * by they - my shares have full board control aha So its a decision by me as it were We should have more transparent & distributed governance in AI as it becomes more and more important Its a hard...
- Tweet from Edward Grefenstette (@egrefen): This week in AI: Quis griftat ipsos griftatores?
- Tweet from Emad acc/acc (@EMostaque): Also there is no presale, TGE or token fml If there was though I would call it stable coin
- Tweet from Emad acc/acc (@EMostaque): Not going to beat centralized AI with more centralized AI. All in on #DecentralizedAI Lots more 👇 ↘️ Quoting Stability AI (@StabilityAI) An announcement from Stability AI: https://bit.ly/43zsVj...
- News - Stability AI: Discover the latest product announcements, company updates, and industry news with Stability AI. We build the foundation to activate humanity's potential.
- Tweet from Emad acc/acc (@EMostaque): As my notifications are RIP some notes: 1. My shares have majority of vote @StabilityAI 2. They have full board control The concentration of power in AI is bad for us all I decided to step down to...
Interconnects (Nathan Lambert) ▷ #random (8 messages🔥):
- RL Generalist Agent Discussion: A member linked to a discussion on Twitter about the philosophy of creating a "generalist agent" in Reinforcement Learning (RL), considering the practical and in-principle possibilities of realizing such an agent.
- Misinterpretation of Antitrust Laws Online: Nathan Lambert expressed frustration over the general public's misunderstanding of antitrust laws in regard to recent tech lawsuits and debates.
- Criticism of Apple Antitrust Lawsuits: Several messages by Nathan Lambert criticize the FTC's lawsuits against Apple, suggesting that people cheering them on should read more informed opinions, such as those by Ben Thompson.
- Disagreement with Twitter Sentiments on FTC vs. Apple: In an argument on Twitter, Nathan Lambert holds the position that the FTC lawsuits against Apple are misplaced, supported by a tweet implying that the effects of regulation on a company like Apple are negligible.
- Conversation About the Merits of FTC Lawsuits: A member named twkillian shared the view that the recent FTC lawsuits' allegations of anti-competitive behavior are questionable, doubting that such behavior was intended to worsen the marketplace or disadvantage other products.
Links mentioned:
- Tweet from Machine Learning Street Talk (@MLStreetTalk): We just dropped the show with @MinqiJiang and @MarcRigter and discuss the philosophy of whether it is possible, in principle and in practice to build a "generalist agent" in RL.
- Tweet from el (@fentpot): @nagolinc @norabelrose @natolambert i'm sorry but the second order effects are so negligible here for a large company like Apple. most founders would be thrilled to get anywhere near Apple's size even...
Interconnects (Nathan Lambert) ▷ #reads (19 messages🔥):
- Insights from Anthropic's CEO: Nathan Lambert highlighted an interview titled "Anthropic CEO on Leaving OpenAI and Predictions for Future of AI" discussing Dario Amodei's predictions on the AI industry for 2024 and beyond.
- Reflecting on Early OpenAI Visions: Nathan remarked that early OpenAI contributors had a clear vision of the compute trajectory in AI.
- Exploring Interview Content: Nathan Lambert expressed an interest in Mistral's CEO, Arthur Mensch, for potential insights into company culture, directing attention to a "Fireside Chat w/ Mistral CEO, Arthur Mensch".
- Defining AGI amid Debate: The chat touched on the difficulty of defining AGI, with Nathan stating a personal threshold that includes GPT-4 as AGI, sparking a discussion on what constitutes true general intelligence.
- February's AI Highlights with Latent Space: Xeophon shared a link to Latent Space's monthly recap for February 2024, which covers significant AI news and upcoming events, including "AI UX 2024", and references the rapid user growth of ChatGPT in early 2023. The recap can be found here.
Links mentioned:
- The Unbundling of ChatGPT (Feb 2024 Recap): Peak ChatGPT? Also: our usual highest-signal recap of top items for the AI Engineer from Feb 2024!
- Fireside Chat w/ Mistral CEO, Arthur Mensch: Join us to hear from Arthur Mensch, Co-founder & CEO of Mistral, in conversation w/ Elad Gil. Topics covered will include: Open source & LLMs, Agents and mul...
- Anthropic CEO on Leaving OpenAI and Predictions for Future of AI: Dario Amodei is the Co-Founder and CEO of Anthropic. In the episode, we discuss detailed predictions on the AI industry for 2024, 2025, and beyond. Dario dis...
LangChain AI ▷ #general (42 messages🔥):
- The Perils of "You Up?" in AI: A user humorously highlighted the complex prerequisites for a chatbot to respond to the question "you up?", suggesting it should first solve the space-time continuum and ensure a secure connection.
- Deciding Which Chain to Use: There was a conversation about using multiple chains for a task, with a member suggesting that the decision of whether to query a SQL database or a vector database should be based on expected result sizes.
- Technical Struggles with RunnableParallel and Streamlit: A member encountered a "missing ScriptRunContext" error when attempting to use RunnableParallel within a Streamlit app, indicating possible compatibility issues between the two.
- Launching a Learning Platform for RAG: A user shared a link to an upcoming free resource to learn Retrieval-Augmented Generation (RAG) for programming with AI, mentioning OpenAI, LangChain, Chroma, and Python as some of the technologies participants will work with. Intro to AI for Developers
- Vector Database Choices and Clustering Algorithms for Information Grouping: A user sought advice on choosing between ChromaDB and Qdrant for vector databases, and between density-based and centroid-based clustering algorithms for semantic-based cluster grouping of key information in documents.
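The chain-selection idea in the second bullet can be sketched as a toy pre-router. The function name and keyword list are illustrative assumptions, not LangChain API; in practice an LLM call or a trained classifier would make this decision:

```python
def route_query(question: str) -> str:
    """Toy pre-router: send aggregate/structured questions to the SQL
    chain and open-ended semantic ones to the vector-store chain.
    The keyword heuristic here is purely illustrative."""
    structured_markers = ("how many", "count", "sum", "average", "total")
    q = question.lower()
    return "sql" if any(m in q for m in structured_markers) else "vector"

print(route_query("How many orders shipped in March?"))           # -> sql
print(route_query("What did reviewers say about battery life?"))  # -> vector
```

The point of routing up front is exactly the "expected result sizes" concern: aggregate questions need the whole table, which a top-k vector retrieval cannot provide.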
Links mentioned:
- A Practical Introduction to AI for Developers â TakeHomes Library: no description found
- ConversationChain default prompt leads the model to converse with itself · Issue #6138 · langchain-ai/langchain: System Info langchain==0.0.195 python==3.9.6 Who can help? @hwchase17 Information The official example notebooks/scripts My own modified scripts Related Components LLMs/Chat Models Embedding Models...
- The HEALTHCHECK flag "start-interval" is not recognized in Docker version 24.0.5, build ced0996 · Issue #6938 · docker/for-mac: Description The docker documentation indicates that HEALTHCHECK has a flag called "start-interval" (docs) Actually using that flag in a Dockerfile causes an error Reproduce Use this Dockerfi...
- Tool error handling | 🦜️🔗 Langchain: Using a model to invoke a tool has some obvious potential failure modes. Firstly, the model needs to return an output that can be parsed at all. Secondly, the model needs to return tool arguments that ...
- Issues · langchain-ai/langchain: 🦜🔗 Build context-aware reasoning applications. Contribute to langchain-ai/langchain development by creating an account on GitHub.
LangChain AI ▷ #langserve (1 messages):
- Inquiring about Client-Side Execution with Langserve: A member asked if it's possible to use a langserve-hosted runnable with tools that are executed on the client's side. There were no further details or responses provided.
LangChain AI ▷ #share-your-work (12 messages🔥):
- Innovative Way to Extend LLM Output Beyond Limits: A suggestion was made about a workaround to surpass the 4k output-token limit of GPT-4-Turbo by detecting the stop reason "length" and sending a follow-up request containing the original prompt plus the generated text, allowing generation to continue.
- Bedrock and Python Integration Guide: A comprehensive guide on leveraging Bedrock in combination with Python has been introduced. Those interested can access the full article here.
- Announcing SimplyAnalyze for LLM Analytics: SimplyAnalyze.ai was presented, a service that integrates with LangChain to analyze LLM conversations across various company departments. The creators shared contact information for those interested in their free developer preview, and you can get in touch through their website.
- Exploring Agent Tree Search with Langchain: An informative post has been shared about using Langchain to improve decision-making with Language Models. You can read the full article here.
- Langchain-based Chatbot with Enhanced Capabilities: A local character AI chatbot was updated, featuring improvements in CSV parsing, NER parsing, web scraping, and document fetching. Access the repository to explore the enhancements on GitHub.
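The continuation workaround in the first bullet can be sketched provider-agnostically. The `generate` callable and its `(chunk, finish_reason)` return shape are assumptions standing in for a real chat-completion call; real APIs report the stop reason in a field such as `finish_reason`:

```python
def continue_until_done(generate, prompt, max_rounds=5):
    """Re-prompt while the model stops because it hit its output limit.

    `generate(text)` stands in for an API call and returns
    (chunk, finish_reason); "length" means the output was truncated,
    so we resend the prompt plus everything generated so far.
    """
    output = ""
    for _ in range(max_rounds):
        chunk, finish_reason = generate(prompt + output)
        output += chunk
        if finish_reason != "length":
            break
    return output

# Toy stand-in model that emits a fixed answer in 3-character chunks.
ANSWER = "hello world"
def fake_generate(text):
    done = len(text) - len("Q: ")
    chunk = ANSWER[done:done + 3]
    reason = "stop" if done + 3 >= len(ANSWER) else "length"
    return chunk, reason

print(continue_until_done(fake_generate, "Q: "))  # hello world
```

One caveat the original suggestion glosses over: each continuation call re-sends the growing transcript, so input-token cost climbs with every round.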
Links mentioned:
- GitHub - Haste171/langchain-chatbot: AI Chatbot for analyzing/extracting information from data in conversational format.: AI Chatbot for analyzing/extracting information from data in conversational format. - Haste171/langchain-chatbot
- GitHub - ossirytk/llama-cpp-chat-memory: Local character AI chatbot with chroma vector store memory and some scripts to process documents for Chroma: Local character AI chatbot with chroma vector store memory and some scripts to process documents for Chroma - ossirytk/llama-cpp-chat-memory
LangChain AI ▷ #tutorials (5 messages):
- Discover LangGraph Control over Chatbots: A YouTube video titled "How To Control Your Chatbot Actions and Prompt System: LangGraph" was shared, demonstrating ways to build an agent within Langchain and automate your chatbot experience. The video is available at How To Control Your Chatbot Actions and Prompt System: LangGraph.
- Mr. Beast Adventures into AI Cookbooks: A creative YouTube video, "Mr. Beast Meets Mistral: AI Created a Cookbook Based on His Wildest Stunts!", was shared, showcasing an AI-generated cookbook inspired by the popular YouTuber's stunts. Check out the entertaining concept at Mr. Beast Meets Mistral.
- Spam Alert: Multiple identical messages offering a $50 Steam gift card were posted, potentially indicating spam activity. The accompanying link was steamcommunity.com/gift/758474483.
Links mentioned:
- How To Control Your Chatbot Actions and Prompt System: LangGraph: #langgraph #langchain #ai #chatbot #python #automation if you are keeping up with Agent in Langchain, you know there are many ways to build an agent, but in ...
- Mr. Beast Meets Mistral: AI Created a Cookbook Based on His Wildest Stunts!: Today we create Beast Cookbook. The "Beast Cookbook" idea is a fun and creative way to engage with Mr. Beast's content and generate an entertaining, fictional ...
LLM Perf Enthusiasts AI ▷ #general (21 messages🔥):
- Trouble in Real Estate AI-Land: A member from Uniti AI is struggling to get GPT-4-Turbo to accurately match property inventory against user requirements, mentioning issues such as suggesting a 17,000-square-foot property when the request was for 2,000-4,000.
- LLM's Role in Filtering: The member's current approach involves using the LLM to match properties from a CSV file against specified criteria. The detailed prompt aims to ensure that inventory suggestions remain within the specified requirements, with a variance of up to +/- 20%.
- Simple Solutions Over Complication: Another member suggested using a simple database filter instead of an LLM, pointing out that an LLM can generate the query but is not necessary for the actual filtering process.
- The Common LLM Trap Avoided: In response to feeling unintelligent for the oversight, the member seeking help was reassured that falling into the "common LLM trap" happens and isn't a reflection of their capabilities.
- Useful Resources Linked: They provided a link to an instructional blog post by Jason Liu, "RAG is more than just embedding search", that discusses the limitations of embedding search and the applicability of LLMs for generating queries and handling natural language interactions.
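The "generate the query, filter deterministically" advice can be sketched as follows. Field names and the tolerance handling are illustrative assumptions; the LLM's only job here would be extracting the numeric range from the user's message:

```python
def match_inventory(properties, min_sqft, max_sqft, variance=0.20):
    """Deterministic range filter with the +/- 20% tolerance applied to
    the bounds. A 17,000 sqft listing can never slip into a
    2,000-4,000 sqft request the way a pure-LLM match might."""
    lo = min_sqft * (1 - variance)
    hi = max_sqft * (1 + variance)
    return [p for p in properties if lo <= p["sqft"] <= hi]

inventory = [{"id": "A", "sqft": 2500},
             {"id": "B", "sqft": 4400},
             {"id": "C", "sqft": 17000}]
print(match_inventory(inventory, 2000, 4000))  # A and B match; C is excluded
```

The design point: the filter is exact and auditable, and the LLM is reserved for the natural-language parts (parsing the request, phrasing the reply).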
Link mentioned: RAG is more than just embedding search - Instructor: no description found
LLM Perf Enthusiasts AI ▷ #claude (5 messages):
- Chafing Under Anthropic's Rate Limits: One member expressed frustration about Anthropic's strict rate limits, citing a 200k context window but only 1M tokens per day allowed on the API.
- Looking for Bedrock's Financial Ease: The same member inquired about Bedrock's monthly-fee model for guaranteed throughput, seeking insights from anyone with experience using the service.
- Anthropic's Scale Plan Provides Relief: Another member suggested contacting Anthropic's sales team for access to their "scale" plan, noting a monthly spend of $500, which was described as relatively low cost.
LLM Perf Enthusiasts AI ▷ #resources (3 messages):
- Hunt for the Ultimate Guides: A user is compiling a resource guide and is seeking the community's favorite explainer resources on advanced topics related to Large Language Models (LLMs).
- Exa.ai Endorsement: In response to the call for resources, exa.ai was suggested as a useful tool for exploring LLM-adjacent topics.
- Clarification on Resource Depth: The user clarified their request, saying they are searching for the best, clearest explainers on topics like RLHF, beyond just a compilation of numerous blog posts or articles.
LLM Perf Enthusiasts AI ▷ #jobs (1 messages):
ibash: > write high quality code Damn.
LLM Perf Enthusiasts AI ▷ #openai (1 messages):
- GPT-3.5-0125 Outshines Its Predecessors: A member highlighted that GPT-3.5-0125 significantly outperforms previous models in all their tests, marking it as a distinctly superior iteration.
LLM Perf Enthusiasts AI ▷ #prompting (1 messages):
emrgnt_cmplxty: Basic prompting isn't getting it done for you?
Alignment Lab AI ▷ #looking-for-collabs (1 messages):
- Volunteers Needed for Groundbreaking ML Research Project: The Youth Inquiry Network and Futracode are collaborating to develop a machine learning algorithm that will recommend the best research topics by utilizing existing research databases. They are seeking web developers, data analysts, and AI & ML experts to contribute to this ambitious endeavor.
- Contribute for Recognition and Experience: Volunteers will have the chance to boost their portfolios and receive a certification, two professional recommendation letters, and the source code from the project. The engagement promises flexible scheduling and does not require a time-intensive commitment.
- Work Now, Own Later: Participants in this non-profit initiative will retain full rights to the developed ML algorithm, including the freedom to display, promote, sell, or use it however they see fit after the project's completion.
- No Red Tape to Get Involved: Interested individuals can express their interest directly, without a formal application, simply by messaging the recruiter or by commenting "interested" to be contacted about further steps.
Alignment Lab AI ▷ #general-chat (8 messages🔥):
- Brief Communication Interchange: The phrase life lesson prompted a humorous empathetic response from another member, indicating a shared communal understanding or incident.
- Link to Educational Document: A member shares a potentially informative Google Docs link regarding Post-AGI Educational Reforms; however, no further details or context is provided.
- Query for DPO-P Training Code: A member inquires if anyone possesses a code for DPO-P training, with no further elaboration on what DPO-P entails or the application of such code.
- Moment of Self-Awareness: In a playful turn of self-realization, a member recognizes their own mod status after initially calling for moderation.
- Recruiting Volunteers for ML Project Collaboration: A call-to-arms is issued for coders, data analysts, AI, and ML specialists to volunteer on a machine learning project aimed at suggesting research topics, with incentives like certifications, recommendation letters, and the freedom to use the resulting code. Interested individuals are invited to directly message the initiator with the word "interested".
Link mentioned: Post-AGI Educational Reforms: no description found
Alignment Lab AI ▷ #looking-for-workers (1 messages):
- Non-Profits Seek Tech Talent for Groundbreaking ML Project: The "Youth Inquiry Network" and "Futracode" are collaborating to create a machine learning algorithm that suggests the best research topics by training on research databases. They're looking for web developers, data analysts, and AI & ML specialists to join this endeavor.
- Volunteer Work with Tangible Perks: The project is a volunteer effort aimed at benefiting students struggling to find research topics. Contributors will receive a boost to their portfolio, certification, and recommendation letters from the founders of the two non-profits.
- Contribute Code for a Cause: Volunteers will retain the source code for personal growth, experience enhancement, and even the freedom to sell their contributions following project completion. The work schedule is flexible, tailored to fit the contributors' availability.
- No-Strings-Attached Application Process: Interested individuals can join the project by directly messaging or commenting "interested", with no formal application form required.
Datasette - LLM (@SimonW) ▷ #llm (5 messages):
- Clarifying `llm` and `ollama` Difference: A member explained that `llm` interfaces with models but doesn't execute them the way `ollama` does. `llm` can be set up to use API endpoints served by `ollama`, which executes models locally and makes them available as local HTTP API endpoints.
- Understanding Mistral Model Execution: Inquiring about Mistral model execution, a member received clarification that using the Mistral model with `llm` implies it's running locally, but through the HTTP API endpoints provided by `ollama`, or via the `llm-llama-cpp` plugin, which can run local models without HTTP.
- Appreciation for AI-Powered Git Commit Helper: A member shared their continuous use of AICommits (GitHub - Nutlope/aicommits), a tool for writing git commit messages with AI assistance, while expressing a desire for features like emoji standards for commits.
Link mentioned: GitHub - Nutlope/aicommits: A CLI that writes your git commit messages for you with AI: A CLI that writes your git commit messages for you with AI - Nutlope/aicommits
Skunkworks AI ▷ #off-topic (2 messages):
- AI Gets Culinary: A member shared a YouTube video titled "Mr. Beast Meets Mistral: AI Created a Cookbook Based on His Wildest Stunts!". The video discusses how an AI created a cookbook inspired by the stunts of YouTuber Mr. Beast.
- Seeking German DL/AI Content: A member asked the group for recommendations on deep learning/AI podcasts or video series in German, expressing an interest in engaging with content in that language.
Link mentioned: Mr. Beast Meets Mistral: AI Created a Cookbook Based on His Wildest Stunts!: Today we create Beast Cookbook. The "Beast Cookbook" idea is a fun and creative way to engage with Mr. Beast's content and generate an entertaining, fictional ...