All tags
Topic: "hardware-configuration"
12/27/2023: NYT vs OpenAI
phi2 openhermes-2.5-mistral-7b llama-2-7b llama-2-13b microsoft-research mistral-ai apple amd model-performance fine-tuning llm-api gpu-optimization hardware-configuration multi-gpu inference-speed plugin-release conversation-history
The LM Studio Discord community extensively discussed model performance comparisons, notably between Microsoft Research's Phi-2 and OpenHermes 2.5 Mistral 7B, with a focus on U.S. history knowledge and fine-tuning for improved accuracy. Technical challenges around LLM API usage, conversation-history maintenance, and GPU optimization for inference speed were addressed. Hardware discussions covered DDR4 vs. DDR5 memory, multi-GPU setups, and the potential of Apple M1/M3 and AMD AI CPUs for AI workloads. The community also announced the ChromaDB Plugin v3.0.2 release, which enables image search in vector databases. Users shared practical tips on running multiple LM Studio instances and optimizing resource usage.
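The conversation-history point above comes down to the fact that an OpenAI-style chat completions endpoint (which LM Studio's local server mimics) is stateless: the client must resend the accumulated message list with every request. A minimal sketch in Python; the `Conversation` helper, model name, and endpoint URL are illustrative assumptions, not LM Studio's actual API surface:

```python
# Sketch: maintain conversation history client-side, since a chat
# completions endpoint keeps no memory between requests.
class Conversation:
    def __init__(self, system_prompt):
        self.messages = [{"role": "system", "content": system_prompt}]

    def add_user(self, text):
        self.messages.append({"role": "user", "content": text})

    def add_assistant(self, text):
        # Store the model's reply so the next request includes it as context.
        self.messages.append({"role": "assistant", "content": text})

    def payload(self, model="local-model"):
        # Request body to POST to e.g. http://localhost:1234/v1/chat/completions
        # (hypothetical local-server URL).
        return {"model": model, "messages": self.messages}

conv = Conversation("You are a helpful assistant.")
conv.add_user("Who wrote the Federalist Papers?")
conv.add_assistant("Hamilton, Madison, and Jay.")
conv.add_user("Which of them became president?")
print(len(conv.payload()["messages"]))  # 4 messages go out on the next call
```

Because the full history is resent each time, long chats eventually exceed the model's context window; truncating or summarizing older turns is the usual workaround.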
12/26/2023: not much happened today
llava exllama2 meta-ai-fair google-deepmind gpu-offloading vram-utilization model-conversion moe-models multimodality model-performance hardware-configuration model-saving chatml installation-issues music-generation
LM Studio users extensively discussed its performance, installation issues on macOS, and upcoming features such as Exllama2 support and multimodality via the LLaVA model. Conversations covered GPU offloading, VRAM utilization, expert selection in MoE models, and model-conversion compatibility. The community also addressed inefficient help requests, pointing to the blog post 'Don't Ask to Ask, Just Ask'. Technical challenges with the ChromaDB Plugin, server vs. desktop hardware performance, and saving model states with Autogen were highlighted. Discussions included comparisons with other chatbots and mentions of AudioCraft from meta-ai-fair and MusicLM from google-deepmind for music generation.
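The GPU-offloading and VRAM-utilization discussion above is, at bottom, a budgeting question: how many of a quantized model's layers fit on the card once fixed overhead (context buffers, scratch space) is reserved. A rough back-of-envelope sketch; the equal-layer-size assumption and the overhead figure are illustrative, not LM Studio internals:

```python
def max_offload_layers(vram_gb, model_size_gb, n_layers, overhead_gb=1.0):
    """Estimate how many transformer layers of a quantized model fit in VRAM,
    assuming layers are roughly equal in size and a fixed amount of memory
    is reserved for context buffers and scratch space (assumed figures)."""
    per_layer_gb = model_size_gb / n_layers
    usable_gb = max(vram_gb - overhead_gb, 0.0)
    return min(n_layers, int(usable_gb / per_layer_gb))

# e.g. a ~7.5 GB Q4-quantized 13B model with 40 layers on an 8 GB card:
print(max_offload_layers(8.0, 7.5, 40))  # → 37: offload most, not all, layers
```

Setting the offload slider higher than this kind of estimate allows is what typically produces the out-of-memory failures and slowdowns users reported; offloading fewer layers keeps the remainder on system RAM at the cost of speed.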