not much happened today
X Engineering open-sourced its new transformer-based recommender algorithm, sparking community debate on transparency and fairness. GLM-4.7-Flash (30B-A3B) is gaining momentum as a strong local-inference model, aided by efficient KV-cache management and quantization tuning strategies. Innovations include tensor parallelism across Mac Minis achieving ~100 tok/s throughput. Research highlights "Societies of Thought" as a reasoning mechanism that improves model accuracy by more than 20%.
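Why KV-cache quantization matters so much for local inference can be sketched with simple arithmetic: the cache grows linearly with context length, and dropping from fp16 to 4-bit cuts its footprint by 4×. The layer/head/dim figures below are assumed placeholders for a ~30B-A3B-class model, not published GLM-4.7-Flash specs.

```python
# Hedged sketch: KV-cache memory footprint under different quantization
# levels. All model-shape constants below are illustrative assumptions.

def kv_cache_bytes(n_layers, n_kv_heads, head_dim, context_len, bytes_per_elem):
    # K and V each store n_layers * n_kv_heads * head_dim values per token,
    # hence the leading factor of 2.
    return int(2 * n_layers * n_kv_heads * head_dim * context_len * bytes_per_elem)

# Hypothetical shape for a ~30B-A3B-class model with grouped-query attention.
N_LAYERS, N_KV_HEADS, HEAD_DIM, CTX = 48, 8, 128, 32_768

for name, nbytes in [("fp16", 2), ("q8", 1), ("q4", 0.5)]:
    gib = kv_cache_bytes(N_LAYERS, N_KV_HEADS, HEAD_DIM, CTX, nbytes) / 2**30
    print(f"{name}: {gib:.2f} GiB")
```

Under these assumed dimensions the cache shrinks from 6 GiB at fp16 to 1.5 GiB at 4-bit for a 32K context, which is often the difference between fitting a long context on consumer hardware or not.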
not much happened today
MiniMax M2.1 launches as an open-source agent and coding Mixture-of-Experts (MoE) model with ~10B active / ~230B total parameters; it claims to outperform Gemini 3 Pro and Claude Sonnet 4.5 and supports local inference, including on Apple Silicon M3 Ultra with quantization. GLM 4.7 demonstrates local scaling on Mac Studios with 2× 512GB M3 Ultra hardware, highlighting system-level challenges such as bandwidth and parallelism. Inference quality is emphasized as a key factor behind output variance across deployments. Yann LeCun's VL-JEPA proposes a non-generative, non-autoregressive multimodal model that operates in latent space for efficient real-time video processing with fewer parameters and decoding operations. Advances in agentic reinforcement learning for coding include self-play methods in which agents inject and fix bugs autonomously, enabling self-improvement without human labeling, backed by large-scale RL infrastructure with massive parallel code generation and execution sandboxes.
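The MoE economics behind running a ~230B-total model locally come down to one asymmetry: weight memory scales with total parameters, while per-token compute scales roughly with active parameters. A minimal back-of-the-envelope sketch, using the rounded figures from the summary above and assumed quantization levels:

```python
# Hedged sketch: memory vs compute for a ~10B-active / ~230B-total MoE.
# Parameter counts are the rounded figures reported for MiniMax M2.1;
# bytes-per-weight values are assumed quantization levels, not official ones.

TOTAL_PARAMS = 230e9   # all experts must be resident in memory
ACTIVE_PARAMS = 10e9   # experts actually routed to per token

for name, bytes_per_weight in [("fp16", 2.0), ("8-bit", 1.0), ("4-bit", 0.5)]:
    weights_gb = TOTAL_PARAMS * bytes_per_weight / 1e9
    fits = weights_gb < 512  # e.g. a single M3 Ultra with 512 GB unified memory
    print(f"{name}: ~{weights_gb:.0f} GB of weights, fits in 512 GB: {fits}")

# Per-token forward-pass FLOPs ~ 2 * active params, so generation compute
# resembles a 10B dense model despite the 230B memory footprint.
print(f"~{2 * ACTIVE_PARAMS / 1e9:.0f} GFLOPs per token")
```

This is why quantized local inference on large-unified-memory Apple Silicon is plausible for such models: at 4-bit the weights need roughly 115 GB, while decode speed is governed by the much smaller active-parameter working set (and memory bandwidth).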
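The self-play loop for coding agents described above can be illustrated with a toy sketch: one agent corrupts a working function, another searches for a patch, and passing unit tests serve as the reward signal. Every name here (`inject_bug`, `fix_bug`, the operator-mutation strategy) is hypothetical and purely illustrative; real systems do this with LLM agents at scale inside execution sandboxes.

```python
# Toy self-play sketch (assumed design, not any lab's actual pipeline):
# a "breaker" injects a bug, a "fixer" searches for a repair, and unit
# tests provide the reward -- no human labels involved.
import random

CORRECT_SRC = "def add(a, b):\n    return a + b\n"

def run_tests(src):
    ns = {}
    exec(src, ns)  # stand-in for an isolated execution sandbox
    return ns["add"](2, 3) == 5 and ns["add"](-1, 1) == 0

def inject_bug(src, rng):
    # Breaker agent: corrupt one operator at random.
    return src.replace("+", rng.choice(["-", "*"]), 1)

def fix_bug(src):
    # Fixer agent: brute-force candidate operator patches until tests pass.
    for op in ["+", "-", "*", "//"]:
        candidate = src.replace("-", op, 1).replace("*", op, 1)
        if run_tests(candidate):
            return candidate
    return None

rng = random.Random(0)
buggy = inject_bug(CORRECT_SRC, rng)
fixed = fix_bug(buggy)
reward = 1 if fixed is not None and run_tests(fixed) else 0
print("fixer reward:", reward)
```

The point of the sketch is the closed loop: because the breaker always starts from a verified-correct program, the tests automatically label every fixer attempt, which is what lets the system self-improve without human annotation.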