Model: "amazon-titan-text-lite"
12/30/2023: Mega List of all LLMs
deita-v1.0 mixtral amazon-titan-text-express amazon-titan-text-lite nous-research hugging-face amazon mistral-ai local-attention computational-complexity benchmarking model-merging graded-modal-types function-calling data-contamination training-methods stella-biderman euclaise joey00072
Stella Biderman's tracking list of LLMs was highlighted, with resources shared for browsing. The Nous Research AI Discord discussed a Local Attention Flax module with a focus on computational complexity, debating linear versus quadratic cost and proposing chunking as a solution (sketched below). Benchmark logs for various LLMs were shared, including Deita v1.0 and its SFT+DPO training method. Discussions covered model merging, graded modal types, function calling in AI models, and data contamination issues in Mixtral. Community insights were sought on the Amazon Titan Text Express and Amazon Titan Text Lite LLMs, including a unique training strategy involving bad datasets. Several GitHub repositories and projects, such as DRUGS, MathPile, CL-FoMo, and SplaTAM, were referenced for performance and data-quality evaluations.
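
The chunking idea mentioned above can be illustrated with a minimal JAX sketch (not the actual Local Attention Flax module from the discussion): restricting each token to attend only within its fixed-size chunk makes the cost grow as O(n·w) in sequence length n and window size w, rather than O(n²). The `chunk_size` value and tensor shapes below are illustrative assumptions.

```python
import jax
import jax.numpy as jnp

def chunked_local_attention(q, k, v, chunk_size=64):
    """Attention computed independently per chunk of the sequence.

    q, k, v: [seq_len, d]; seq_len is assumed to be a multiple of chunk_size
    (pad in practice). Each chunk of size w attends only to itself, so the
    score matrices are [w, w] instead of a single [seq_len, seq_len] matrix.
    """
    seq_len, d = q.shape
    n_chunks = seq_len // chunk_size
    qc = q.reshape(n_chunks, chunk_size, d)
    kc = k.reshape(n_chunks, chunk_size, d)
    vc = v.reshape(n_chunks, chunk_size, d)
    scores = jnp.einsum("cqd,ckd->cqk", qc, kc) / jnp.sqrt(d)  # [n_chunks, w, w]
    weights = jax.nn.softmax(scores, axis=-1)
    out = jnp.einsum("cqk,ckd->cqd", weights, vc)
    return out.reshape(seq_len, d)

# 512 tokens with chunk_size=64 -> 8 independent 64x64 attention blocks.
q = k = v = jnp.ones((512, 64))
print(chunked_local_attention(q, k, v).shape)  # (512, 64)
```

A common refinement (not shown here) is letting each chunk also attend to its neighboring chunk so tokens near a boundary are not cut off, which still keeps the cost linear in sequence length.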