All tags
Model: "tinyllama-1.1b"
12/29/2023: TinyLlama on the way
tinyllama-1.1b openai hugging-face gpu-optimization model-deployment discord-bots embedding-models inference-server hardware-compatibility model-performance beta-testing autogen context-window
The Nous/Axolotl community is pretraining a 1.1B-parameter model on 3 trillion tokens, with promising HellaSwag results for a model of that size. The LM Studio Discord discussions cover extensive GPU-related issues, Discord bot integration with the OpenAI API, and hardware limitations affecting model usage. Community members also discuss server hosting for embeddings and LLMs, propose Discord channel updates to improve model-development collaboration, and address a gibberish-output problem in beta releases. Users also clarify installation and operational challenges with the Autogen tool.
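The Discord bot integration mentioned above typically amounts to relaying channel messages to a chat-completions endpoint. A minimal sketch, assuming discord.py and the openai 1.x client; the model name and environment variable names are illustrative, not from the discussion:

```python
import os

import discord
from openai import AsyncOpenAI

intents = discord.Intents.default()
intents.message_content = True  # required to read message text

bot = discord.Client(intents=intents)
oai = AsyncOpenAI(api_key=os.environ["OPENAI_API_KEY"])

@bot.event
async def on_message(message: discord.Message) -> None:
    # Ignore our own messages and anything that doesn't mention the bot.
    if message.author == bot.user or bot.user not in message.mentions:
        return
    reply = await oai.chat.completions.create(
        model="gpt-3.5-turbo",  # placeholder model name
        messages=[{"role": "user", "content": message.clean_content}],
    )
    # Discord caps messages at 2000 characters, so truncate defensively.
    await message.channel.send(reply.choices[0].message.content[:2000])

bot.run(os.environ["DISCORD_BOT_TOKEN"])
```

Using the async OpenAI client keeps the event loop unblocked while waiting on completions, which matters for a bot serving a busy channel.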
12/28/2023: Smol Talk updates
tinyllama-1.1b mixtral tinygpt-v nous-research tyrannosaurus latex benchmarking knowledge-graphs model-finetuning tokenization decentralized-computation philosophy-of-ai multimodality vision open-source-models gary-marcus
Nous Research AI Discord discussions covered topics such as AI placement charts, ChatGPT's failure to generate LaTeX math in a format compatible with Obsidian, and the TinyLlama 1.1B model's performance on various benchmarks. Users shared resources including the math-centric corpus MathPile, knowledge-graph construction methods, and open-source large language model repositories. Technical threads examined the feasibility of decentralized computation for models like Mixtral, philosophical debates on AI sentience, and strategies for model finetuning and token counting. The community also discussed the Obsidian model, vision model training, and Tyrannosaurus's release of the multimodal TinyGPT-V model. Notable quotes included "ChatGPT not generating Latex math format compatible with Obsidian" and "optimistic about human-level AI within our lifetime".
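For the token-counting strategies mentioned, the usual approach is to run text through the target model's own tokenizer rather than estimating by words. A small sketch using the Hugging Face transformers library; the TinyLlama checkpoint name is an assumption and may differ from what the community used:

```python
from transformers import AutoTokenizer

# Assumed checkpoint name; substitute the tokenizer of the model you target.
tokenizer = AutoTokenizer.from_pretrained("TinyLlama/TinyLlama-1.1B-Chat-v1.0")

def count_tokens(text: str) -> int:
    # Exclude special tokens so the count reflects only the input text.
    return len(tokenizer.encode(text, add_special_tokens=False))

print(count_tokens("TinyLlama was pretrained on 3 trillion tokens."))
```

Counting with the model's actual tokenizer matters because vocabularies differ: the same string can cost a different number of tokens, and thus a different slice of the context window, from one model to another.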