Model: deepseek-v3.1
not much happened today
qwen3-vl-4b qwen3-vl-8b qwen2.5-vl-72b deepseek-v3.1 alibaba arena runway nvidia togethercompute ollama model-optimization fine-tuning inference-speed video-generation diffusion-models representation-learning local-ai speculative-decoding fp8-quantization context-windows karpathy
Alibaba released compact dense Qwen3-VL models at 4B and 8B sizes with FP8 options, supporting up to 1M context and open-vocabulary detection and rivaling much larger models like Qwen2.5-VL-72B. Ecosystem support includes MLX-VLM, LM Studio, vLLM, Kaggle models, and Ollama Cloud. In video AI, the Arena added Sora 2 models, which now lead its video benchmarks, while Higgsfield Enhancer improves video quality. Runway launched domain-specific workflow apps for creative tasks. Research on Representation Autoencoders for DiTs (RAE-DiT) shows improved diffusion-model performance. On local training, NVIDIA DGX Spark enables strong local fine-tuning, while Karpathy's nanochat offers a minimal stack for training and inference. Together AI introduced ATLAS, a speculative decoding method achieving up to 4× faster inference on DeepSeek-V3.1 (see the sketch below). These developments highlight advances in efficient model deployment, video AI, local fine-tuning, and inference-speed optimization.
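Speculative decoding of the kind ATLAS builds on pairs a cheap draft model with the expensive target model: the draft proposes a few tokens, the target verifies them in one pass, and only disagreements cost a full target step. Below is a minimal greedy sketch of that loop with toy stand-in "models"; ATLAS additionally adapts the drafter online, which is omitted here.

```python
import random

# Toy stand-ins for a cheap draft model and an expensive target model.
VOCAB = list(range(100))

def draft_next(ctx):            # fast, approximate next-token choice
    random.seed(hash(tuple(ctx)) % 2**32)
    return random.choice(VOCAB)

def target_next(ctx):           # slow, authoritative next-token choice
    random.seed((hash(tuple(ctx)) + 1) % 2**32)
    return random.choice(VOCAB) if random.random() < 0.3 else draft_next(ctx)

def speculative_decode(prompt, n_tokens, k=4):
    """Greedy draft-and-verify: draft k tokens, keep the longest prefix
    the target agrees with, then take one token from the target."""
    out = list(prompt)
    while len(out) - len(prompt) < n_tokens:
        drafts, ctx = [], list(out)
        for _ in range(k):                     # k cheap draft steps
            t = draft_next(ctx); drafts.append(t); ctx.append(t)
        for t in drafts:                       # a real system verifies all k in one pass
            if target_next(out) == t:
                out.append(t)                  # accepted draft token
            else:
                out.append(target_next(out))   # correction from the target
                break
        else:
            out.append(target_next(out))       # bonus token when all k accepted
    return out

print(speculative_decode([1, 2, 3], 12))
```

The speedup scales with how often the target accepts draft tokens, which is why adapting the drafter to live traffic, as ATLAS does, matters.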
not much happened today
gpt-5-pro gemini-2.5 vllm deepseek-v3.1 openai google-deepmind microsoft epoch-ai-research togethercompute nvidia mila reasoning reinforcement-learning inference speculative-decoding sparse-attention kv-cache-management throughput-optimization compute-efficiency tokenization epochairesearch yitayml _philschmid jiqizhixin cvenhoff00 neelnanda5 lateinteraction mgoin_ blackhc teortaxestex
FrontierMath Tier 4 results show GPT-5 Pro narrowly outperforming Gemini 2.5 Deep Think in reasoning accuracy; Epoch AI Research clarified concerns about possible problem leakage. Mila and Microsoft propose Markovian Thinking to improve reasoning efficiency, letting models reason over 24K tokens with less compute by carrying a bounded state forward instead of attending over the whole trace. New research suggests base models already contain reasoning mechanisms, with "thinking models" learning to invoke them effectively. In systems, NVIDIA Blackwell combined with vLLM wins InferenceMAX with significant throughput gains, while Together AI's ATLAS adaptive speculative decoding achieves 4× speedups and cuts RL training time by over 60%. SparseServe introduces dynamic sparse attention with KV-cache tiering, sharply improving throughput and latency under tight GPU memory (see the sketch below).
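SparseServe's exact mechanism isn't spelled out above, but KV tiering generally means keeping only the KV blocks most relevant to the current query in fast GPU memory and leaving the rest in a colder tier. A toy numpy sketch of that selection step follows; the mean-key block scoring is an illustrative assumption, not SparseServe's actual heuristic.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_blocks, block = 64, 32, 16                 # head dim, KV blocks, tokens per block

K = rng.standard_normal((n_blocks, block, d))   # "cold" tier: full KV (think CPU/NVMe)
V = rng.standard_normal((n_blocks, block, d))
q = rng.standard_normal(d)

# Cheap per-block summary (mean key) scores which blocks matter for this query.
block_scores = K.mean(axis=1) @ q
hot = np.argsort(block_scores)[-4:]             # "hot" tier: 4 blocks kept in GPU memory

# Attend only over the hot blocks; the rest never leave the cold tier.
Kh = K[hot].reshape(-1, d)
Vh = V[hot].reshape(-1, d)
w = np.exp(Kh @ q / np.sqrt(d))
w /= w.sum()
out = w @ Vh
print(hot, out.shape)
```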
NVIDIA to invest $100B in OpenAI for 10GW of Vera Rubin rollout
qwen3-omni deepseek-v3.1 nvidia openai oracle intel enfabrica wayne gpu-infrastructure deterministic-inference reinforcement-learning fp8-precision gpu-performance ai-infrastructure strategic-partnerships investment datacenters cuda-graphs pipeline-parallelism data-parallelism artificialanlys gdb
NVIDIA and OpenAI announced a landmark strategic partnership to deploy at least 10 gigawatts of AI datacenters using NVIDIA's systems, with NVIDIA investing up to $100 billion progressively as each gigawatt is deployed, starting in the second half of 2026 on the Vera Rubin platform. The deal reshapes the AI infrastructure funding landscape and could help underwrite OpenAI's $300 billion commitment to Oracle. The announcement triggered major stock-market reactions, with NVIDIA's market cap surging by $170 billion. AI practitioners also highlighted advances in deterministic inference for reinforcement learning and FP8 precision gains in GPU performance (see the sketch below).
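On the FP8 point: the practical gain is that weights and activations take one byte per element and feed FP8 tensor-core matmuls on Hopper/Blackwell-class GPUs. A minimal sketch of per-tensor scaled e4m3 casting, assuming PyTorch ≥ 2.1 for the float8_e4m3fn dtype:

```python
import torch

x = torch.randn(4096, 4096)

# Per-tensor scale maps the tensor's max magnitude onto e4m3's max (~448).
scale = x.abs().max() / 448.0
x_fp8 = (x / scale).to(torch.float8_e4m3fn)        # 1 byte/element vs 4 for fp32

# Dequantize for comparison; real FP8 kernels consume the fp8 tensor
# plus the scale directly inside the matmul instead of round-tripping.
x_back = x_fp8.to(torch.float32) * scale
print((x - x_back).abs().mean() / x.abs().mean())  # relative error, typically a few %
```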
nano-banana is Gemini‑2.5‑Flash‑Image, beating Flux Kontext by 170 Elo with SOTA Consistency, Editing, and Multi-Image Fusion
gemini-2.5-flash-image-preview hermes-4 nemotron-nano-9b-v2 internvl3.5 gpt-oss qwen3 deepseek-v3.1 google-deepmind nous-research nvidia openai ollama huggingface openrouter image-editing natural-language-processing multi-image-composition character-consistency reasoning hybrid-models context-windows model-steerability pretraining finetuning alignment vision vision-language api model-integration sundarpichai _philschmid lmarena_ai omarsar0 skirano yupp_ai xanderatallah officiallogank mervenoyann
Google DeepMind revealed that the viral "nano-banana" model is Gemini-2.5-Flash-Image-Preview, a state-of-the-art image-editing model excelling in character consistency, natural-language edits, and multi-image composition, dominating the Image Edit Arena with a ~170-180 Elo lead and over 2.5M votes. It is integrated into multiple platforms, including Google AI Studio and third-party services. Nous Research released Hermes 4, an open-weight hybrid reasoning model focused on steerability and STEM benchmarks. NVIDIA launched Nemotron Nano 9B V2, a hybrid Mamba-Transformer with 128k context that is the top performer under 10B parameters, and released a 6.6T-token pretraining subset. InternVL3.5 introduced 32 vision-language models built on OpenAI's gpt-oss and Qwen3 backbones. Ollama v0.11.7 added DeepSeek v3.1 support with hybrid thinking and a Turbo mode preview (example below).
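For the Ollama integration, a minimal sketch of calling the local chat API with the hybrid-thinking toggle; the model tag and the availability of the think flag for this model are assumptions tied to a local Ollama ≥ 0.11.7 install with the model pulled.

```python
import requests

# Assumes `ollama pull deepseek-v3.1` has been run against a local daemon.
resp = requests.post(
    "http://localhost:11434/api/chat",
    json={
        "model": "deepseek-v3.1",
        "messages": [{"role": "user", "content": "Why is the sky blue?"}],
        "think": False,   # hybrid model: flip to True to request reasoning traces
        "stream": False,
    },
)
print(resp.json()["message"]["content"])
```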
not much happened today
qwen-image-edit qwen-vl-max kling-2.1 veo-3 deepseek-v3.1 genie-3 sima google-deepmind alibaba google deepseek baseten yupp multimodality embodied-ai simulation fine-tuning quantization video-generation image-generation local-inference scaling agent-training real-time-control spatial-memory demishassabis bonniesjli shreyar ostrisai lmarena_ai teortaxestex ivanfioravanti
DeepMind released Genie 3, an interactive multimodal world simulator with advanced spatial memory and real-time avatar control, and SIMA, an embodied agent that trains inside generated worlds. Alibaba introduced Qwen-Image-Edit, an open-weights image editor scoring Elo 1098 (#2) in the Image Editing Arena and running on Qualcomm NPUs, alongside Qwen-VL-Max entering the Vision top-20. Among video models, Kling 2.1 showed a 235% improvement in frame control, with new entrants Luma Ray 2 and Runway Gen-4 Turbo debuting. Google provided free Veo 3 generations in the Gemini App and added natural-language edits to Google Photos. DeepSeek v3.1 launched with a focus on SWE and Search agents, supporting local inference on Apple Silicon with 4-bit quantization achieving ~21 tok/s on an M3 Ultra (see the sketch below). The news highlights advances in interactive simulation, vision editing, video synthesis, and scalable local AI inference.
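Figures like ~21 tok/s on an M3 Ultra typically come from MLX-based 4-bit inference. A minimal mlx-lm sketch follows; the exact community 4-bit repo id is an assumption, so substitute whichever conversion you actually pull.

```python
# pip install mlx-lm  (Apple Silicon only)
from mlx_lm import load, generate

# Hypothetical repo id: swap in the mlx-community DeepSeek-V3.1 4-bit
# conversion you intend to use.
model, tokenizer = load("mlx-community/DeepSeek-V3.1-4bit")
print(generate(model, tokenizer,
               prompt="Write a haiku about quantization.",
               max_tokens=64))
```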
Cohere Command A Reasoning beats GPT-OSS-120B and DeepSeek R1 0528
command-a-reasoning deepseek-v3.1 cohere deepseek intel huggingface baseten vllm-project chutes-ai anycoder agentic-ai hybrid-models long-context fp8-training mixture-of-experts benchmarking quantization reasoning coding-workflows model-pricing artificialanlys reach_vb scaling01 cline ben_burtenshaw haihaoshen jon_durbin _akhaliq willccbb teortaxestex
Cohere's Command A Reasoning model outperforms GPT-OSS on open deep-research capabilities, emphasizing agentic use cases for 2025. DeepSeek-V3.1 introduces a hybrid reasoning architecture that toggles between reasoning and non-reasoning modes, optimized for agentic workflows and coding, with extensive long-context pretraining (~630B tokens for 32k context, ~209B for 128k), FP8 training, and a large MoE design with ~37B active parameters. Benchmarks show competitive performance with notable improvements on SWE-Bench and other reasoning tasks. The model is priced at $0.56/M input and $1.68/M output tokens on the DeepSeek API and enjoys rapid ecosystem integration, including HF weights, INT4 quantization by Intel, and vLLM reasoning toggles (see the sketch below). Community feedback highlights the hybrid design's pragmatic approach to agent and software-engineering workflows, though some note the lack of tool use in reasoning mode.
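The hybrid toggle is exposed over DeepSeek's OpenAI-compatible API: the same V3.1 weights sit behind "deepseek-chat" (non-thinking) and "deepseek-reasoner" (thinking). A minimal sketch, assuming the current model aliases and the quoted prices:

```python
from openai import OpenAI

# DeepSeek's API speaks the OpenAI protocol at its own base URL.
client = OpenAI(api_key="sk-...", base_url="https://api.deepseek.com")

resp = client.chat.completions.create(
    model="deepseek-reasoner",   # swap to "deepseek-chat" for non-thinking mode
    messages=[{"role": "user", "content": "Plan a 3-step refactor of a CLI tool."}],
)
print(resp.choices[0].message.content)

# Cost at the quoted rates ($0.56/M input, $1.68/M output):
u = resp.usage
print(u.prompt_tokens * 0.56e-6 + u.completion_tokens * 1.68e-6, "USD")
```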
DeepSeek V3.1: 840B token continued pretrain, beating Claude 4 Sonnet at 11% of its cost
deepseek-v3.1 seed-oss-36b computerrl gemini-2.5-pro gpt-5 claude-code gpt-oss-120b gpt-oss-20b deepseek bytedance zhipu-ai github microsoft anthropic together-ai baseten huggingface token-efficiency coding agentic-benchmarks long-context reinforcement-learning developer-tools fine-tuning multinode-training model-release teortaxestex rasbt lukehoban burkeholland _catwu cline winglian
DeepSeek released DeepSeek V3.1, a quietly rolled-out open model with a 128K context window and improvements in token efficiency, coding, and agentic benchmarks. ByteDance launched the permissively licensed Seed-OSS 36B model on Hugging Face, noted for long-context and reasoning capabilities. Zhipu AI introduced ComputerRL, a reinforcement-learning framework for computer-use agents, achieving strong benchmark results. In developer tooling, GitHub Copilot expanded globally, Microsoft VS Code integrated Gemini 2.5 Pro and updated its GPT-5 agent prompts, and Anthropic launched Claude Code seats with spend controls. Open-source fine-tuning advances include Together AI adding SFT for gpt-oss-120B/20B and Baseten enabling multinode 120B training with its Truss CLI (see the sketch below). The community noted mixed performance and ongoing post-training adjustments for DeepSeek V3.1.
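Together's and Baseten's managed SFT stacks have their own interfaces; as a rough local illustration of what supervised fine-tuning an open checkpoint involves, here is a hedged TRL sketch. The model id and dataset are small stand-ins so the script runs on one GPU; a 120B model would need the multinode sharding mentioned above.

```python
# pip install trl datasets transformers
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

# Placeholder dataset in chat format; swap in your own SFT corpus.
dataset = load_dataset("trl-lib/Capybara", split="train")

trainer = SFTTrainer(
    model="Qwen/Qwen2.5-0.5B-Instruct",   # small stand-in, not gpt-oss-120B
    train_dataset=dataset,
    args=SFTConfig(output_dir="sft-out"),
)
trainer.train()
```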