Topic: "model-memory"
not much happened today
llama-2-70b llama-2-7b mistral-7b qwen-1.5 llava microsoft mistral-ai ollama fine-tuning synthetic-data retrieval-augmented-generation embeddings hardware-optimization performance-benchmarks model-memory multimodality
The Reddit community /r/LocalLlama discusses fine-tuning and training LLMs, including tutorials and questions on training models with specific data such as dictionaries and synthetic datasets of 25B+ tokens. Users explore retrieval-augmented generation (RAG) challenges with models like mistral-7b, as well as embedding generation for EEG brain activity. Discussions cover hardware optimization for running llama-2-70b locally on a budget and performance benchmarks for qwen-1.5 models. There is also interest in extending LLM capabilities, such as converting llama-2-7b into a vision-capable model like llava and improving model memory for longer context retention.
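The RAG discussions above center on retrieving relevant context before generation. A minimal sketch of the retrieval step, using toy bag-of-words "embeddings" and cosine similarity purely for illustration (a real pipeline would use a dedicated embedding model feeding a generator such as mistral-7b; the function names and corpus here are hypothetical):

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy "embedding": lowercase bag-of-words term counts.
    # Stand-in for a real embedding model's dense vector.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse term-count vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    # Rank documents by similarity to the query; return the top k,
    # which would then be prepended to the LLM prompt as context.
    q = embed(query)
    ranked = sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

# Hypothetical corpus echoing the topics above.
docs = [
    "llama-2-70b hardware requirements and GPU memory",
    "fine-tuning with synthetic datasets of 25B tokens",
    "embedding generation for EEG brain activity",
]
print(retrieve("GPU memory for llama-2-70b", docs, k=1))
```

The design point the threads wrestle with is exactly this ranking step: weak embeddings retrieve irrelevant passages, and no amount of generator quality recovers from bad context.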