not much happened today
Tags: glm-4.7-flash, grok, deepseek-r1, qwq, x-ai, unsloth-ai, google, deepseek, ollama, transformer-architecture, recommendation-systems, local-inference, kv-cache, quantization, tensor-parallelism, reasoning, model-optimization, fine-tuning, giffmana, david_sholz, yuchenj_uw, nearcyan, sam_paech, teortaxes_tex, danielhanchen, alexocheema, nopmobiel, rohanpaul_ai
X Engineering open-sourced its new transformer-based recommender algorithm, sparking community debate over transparency and fairness. GLM-4.7-Flash (30B-A3B) is gaining momentum as a strong local-inference model, with community strategies emerging for efficient KV-cache management and quantization tuning. Innovations include tensor parallelism across Mac Minis, reaching roughly 100 tok/s throughput. On the research side, "Societies of Thought" is highlighted as a reasoning mechanism that improves model accuracy by more than 20%.