All tags
Person: "pradeep1148"
12/25/2023: Nous Hermes 2 Yi 34B for Christmas
nous-hermes-2 yi-34b nucleusx yayi-2 ferret teknim nous-research apple mixtral deepseek qwen huggingface wenge-technology quantization model-optimization throughput-metrics batch-processing parallel-decoding tensor-parallelization multimodality language-model-pretraining model-benchmarking teknium carsonpoole casper_ai pradeep1148 osanseviero metaldragon01
Teknium released Nous Hermes 2 on Yi 34B, positioning it as a top open model against Mixtral, DeepSeek, and Qwen. Apple introduced Ferret, a new open-source multimodal LLM. Discussions in the Nous Research AI Discord centered on model optimization and quantization techniques such as AWQ, GPTQ, and AutoAWQ, including insights on proprietary optimizations and throughput metrics. Additional highlights include the NucleusX model, a 30B model scoring 80 on MMLU, being added to transformers, and the YAYI 2 language model from Wenge Technology, pretrained on 2.65 trillion tokens. "AutoAWQ outperforms vLLM up to batch size 8" was noted, and proprietary parallel decoding and tensor parallelization across GPUs were discussed as routes to faster inference.
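The quantization methods mentioned (AWQ, GPTQ, AutoAWQ) all build on group-wise low-bit weight quantization: each small group of weights shares one floating-point scale, and the weights themselves are stored as small integers. A minimal sketch of that core idea, with illustrative bit-width and group size rather than any library's actual defaults:

```python
import numpy as np

def quantize_groupwise(w, bits=4, group_size=8):
    # Group-wise symmetric quantization: each group of `group_size`
    # weights shares one scale; weights are stored as signed integers.
    qmax = 2 ** (bits - 1) - 1                      # e.g. 7 for 4-bit
    groups = np.asarray(w, dtype=np.float64).reshape(-1, group_size)
    scales = np.abs(groups).max(axis=1, keepdims=True) / qmax
    scales[scales == 0] = 1.0                       # avoid divide-by-zero
    q = np.clip(np.round(groups / scales), -qmax - 1, qmax).astype(np.int8)
    return q, scales

def dequantize(q, scales):
    # Reconstruct approximate float weights from ints and per-group scales.
    return (q.astype(np.float64) * scales).reshape(-1)

w = np.random.default_rng(0).normal(size=64)
q, s = quantize_groupwise(w)
err = np.abs(dequantize(q, s) - w).max()            # bounded by half a step
```

AWQ's distinguishing trick, per its paper, is choosing the scales with activation statistics in mind; the sketch above shows only the shared storage format.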
12/10/2023: not much happened today
mixtral-8x7b-32kseqlen mistral-7b stablelm-zephyr-3b openhermes-2.5-neural-chat-v3-3-slerp gpt-3.5 gpt-4 nous-research openai mistral-ai hugging-face ollama lm-studio fine-tuning mixture-of-experts model-benchmarking inference-optimization model-evaluation open-source decentralized-ai gpu-optimization community-engagement andrej-karpathy yann-lecun richard-blythman gabriel-syme pradeep1148 cyborg_1552
The Nous Research AI Discord community discussed attending NeurIPS and organizing future AI events in Australia. Highlights include interest in open-source and decentralized AI projects, with Richard Blythman seeking co-founders. Users shared projects like Photo GPT AI and introduced StableLM Zephyr 3B. The Mixtral model, built on Mistral, sparked debate over performance and GPU requirements, with comparisons to GPT-3.5 and speculation that it could rival GPT-4 after fine-tuning. Tools like TensorBoard, Wandb, and Llamahub were noted for fine-tuning and evaluation. Discussions covered Mixture of Experts (MoE) architectures, fine-tuning with limited data, and inference optimization strategies for ChatGPT. Memes and community interactions referenced AI figures like Andrej Karpathy and Yann LeCun, and members shared related GitHub links and YouTube videos.
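The MoE architecture behind Mixtral can be illustrated with a minimal sketch of top-k gating: a router scores all experts per token, and only the k highest-scoring experts run, their outputs mixed by softmax weights. Dimensions, expert count, and function names below are illustrative, not Mixtral's actual configuration (which uses 8 experts with k=2):

```python
import numpy as np

def moe_forward(x, gate_w, experts, k=2):
    # Sparse MoE layer: per token, pick the top-k experts by gate logit,
    # run only those, and combine outputs with softmax-normalized weights.
    logits = x @ gate_w                          # (tokens, n_experts)
    topk = np.argsort(logits, axis=-1)[:, -k:]   # indices of k best experts
    out = np.zeros_like(x)
    for t in range(x.shape[0]):
        sel = logits[t, topk[t]]
        w = np.exp(sel - sel.max())
        w /= w.sum()                             # softmax over selected only
        for weight, e in zip(w, topk[t]):
            out[t] += weight * experts[e](x[t])  # only k experts execute
    return out

rng = np.random.default_rng(0)
d, n_exp = 4, 8
# Toy linear "experts"; real MoE experts are feed-forward sublayers.
experts = [(lambda W: (lambda v: v @ W))(rng.normal(size=(d, d)))
           for _ in range(n_exp)]
x = rng.normal(size=(3, d))
gate = rng.normal(size=(d, n_exp))
y = moe_forward(x, gate, experts)
```

The GPU-requirement debate in the summary follows from this design: all experts' weights must be resident in memory even though only k run per token, so memory cost scales with total parameters while compute scales with the active subset.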