Cohere's Command A claims #3 open model spot (after DeepSeek and Gemma)
Tags: command-a, mistral-ai-small-3.1, smoldocling, qwen-2.5-vl, cohere, mistral-ai, hugging-face, context-windows, multilinguality, multimodality, fine-tuning, benchmarking, ocr, model-performance, model-releases, model-optimization, aidangomez, sophiamyang, mervenoyann, aidan_mclau, reach_vb, lateinteraction
Cohere's Command A has solidified its position on the LMArena leaderboard as the #3 open model (behind DeepSeek and Gemma): an open-weight 111B-parameter model with an unusually long 256K context window and competitive pricing. Mistral AI released Mistral Small 3.1, a lightweight, multilingual, and multimodal model optimized to run on a single RTX 4090 or a Mac with 32GB of RAM, posting strong results on instruct and multimodal benchmarks. SmolDocling, a new OCR model, offers fast document reading with low VRAM usage, outperforming much larger models such as Qwen2.5-VL. Discussions highlight the importance of system-level improvements over raw LLM advancements, and MCBench is recommended as a superior AI benchmark for evaluating model capabilities across code, aesthetics, and awareness.