Company: "pytorch"
not much happened today
nemotron-nano-2 gpt-oss-120b qwen3 llama-3 minimax-m2 glm-4.6-air gemini-2.5-flash gpt-5.1-mini tahoe-x1 vllm_project nvidia mistral-ai baseten huggingface thinking-machines deeplearningai pytorch arena yupp-ai zhipu-ai scaling01 stanford transformer-architecture model-optimization inference distributed-training multi-gpu-support performance-optimization agents observability model-evaluation reinforcement-learning model-provenance statistical-testing foundation-models cancer-biology model-fine-tuning swyx dvilasuero _lewtun clementdelangue zephyr_z9 skylermiao7 teortaxestex nalidoust
vLLM announced support for NVIDIA Nemotron Nano 2, a hybrid Transformer–Mamba design with a tunable "thinking budget" that enables up to 6× faster token generation. Mistral AI Studio launched a production platform for agents with deep observability. Baseten reported high throughput (650 TPS) for GPT-OSS 120B on NVIDIA hardware. Inspect AI gained Hugging Face inference-provider integration for cross-provider evaluation. Thinking Machines' Tinker abstracts distributed fine-tuning for open-weight LLMs such as Qwen3 and Llama 3. In China, MiniMax M2 shows competitive performance against top models and is optimized for agents and coding, while Zhipu's GLM-4.6-Air focuses on reliability and scaling for coding tasks. Rumors suggest Gemini 2.5 Flash may be a >500B-parameter MoE model, and a possible GPT-5.1 mini reference appeared. Outside LLMs, the 3B-parameter Tahoe-x1 foundation model achieved SOTA on cancer cell biology benchmarks. Research from Stanford introduces a method to detect model provenance via a training-order "palimpsest" with strong statistical guarantees.
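For readers who want a sense of what "supported in vLLM" means in practice, here is a minimal offline-inference sketch; the checkpoint id, sampling settings, and prompt are assumptions for illustration, and the tunable thinking budget reported above is a model-side control not shown here.

```python
# Minimal vLLM offline-inference sketch (illustrative; model id is an assumption).
from vllm import LLM, SamplingParams

llm = LLM(model="nvidia/NVIDIA-Nemotron-Nano-9B-v2", trust_remote_code=True)
params = SamplingParams(temperature=0.6, top_p=0.95, max_tokens=512)

outputs = llm.generate(
    ["Explain the trade-offs of a hybrid Transformer-Mamba stack."], params
)
print(outputs[0].outputs[0].text)
```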
not much happened today
vllm chatgpt-atlas langchain meta microsoft openai pytorch ray claude agent-frameworks reinforcement-learning distributed-computing inference-correctness serving-infrastructure browser-agents security middleware runtime-systems documentation hwchase17 soumithchintala masondrxy robertnishihara cryps1s yuchenj_uw
LangChain & LangGraph 1.0 released with major updates for reliable, controllable agents and unified docs, emphasizing "Agent Engineering." Meta introduced PyTorch Monarch and TorchForge for distributed programming and reinforcement learning, enabling large-scale agentic systems. Microsoft Learn MCP server now integrates with tools like Claude Code and VS Code for instant doc querying, accelerating grounded agent workflows. vLLM improved inference correctness with token ID returns and batch-invariant inference, collaborating with Ray for orchestration in PyTorch Foundation. OpenAI launched ChatGPT Atlas, a browser agent with contextual Q&A and advanced safety features, though early users note maturity challenges and caution around credential access.
not much happened today
gpt-5 gpt4-0314 qwen3-235b-thinking runway-aleph imagen-4-ultra smollm3 grok-4 openai alibaba runway hugging-face google anthropic pytorch lmarena reinforcement-learning reasoning video-generation image-generation model-optimization open-source model-performance inference-speed integration stability sama clementdelangue xikun_zhang_ teknnium1 chujiezheng
OpenAI has fully rolled out its ChatGPT agent to all Plus, Pro, and Team users and is building hype for the upcoming GPT-5, which reportedly outperforms Grok-4 and can build a cookie clicker game in two minutes. Alibaba's Qwen team released the open-source reasoning model Qwen3-235B-Thinking, achieving an 89% win rate over gpt4-0314 using a new RL algorithm called Group Sequence Policy Optimization (GSPO). Runway introduced Runway Aleph, a state-of-the-art in-context video model for editing and generating video content. Hugging Face highlights the growing momentum of open-source AI, especially from Chinese teams. Other updates include Kling's upgrades for image-to-video generation and Google's Imagen 4 Ultra being recognized as a top text-to-image model. Anthropic integrated Claude with Canva for branded visual designs but faces stability issues. The PyTorch team released optimized checkpoints for SmolLM3 to speed up inference.
Too Cheap To Meter: AI prices cut 50-70% in last 30 days
gpt-4o gpt-4o-mini llama-3-1-405b mistral-large-2 gemini-1.5-flash deepseek-v2 sonnet-3.5 exaone-3.0 minicpm-v-2.6 claude-3.5 gpt-4o-2024-08-06 llamaindex together-ai deepinfra deepseek-ai mistral-ai google-deepmind lg-ai-research price-cuts context-caching instruction-tuning vision benchmarks pytorch attention-mechanisms reinforcement-learning-from-human-feedback compute-optimal-scaling rohanpaul_ai akhaliq mervenoyann sophiamyang chhillee karpathy
Gemini 1.5 Flash cut prices by roughly 70% to $0.075/mtok, alongside a generous free tier (rate-limited to 1 million tokens per minute), intensifying the AI model price war. Other significant price reductions include GPT-4o (~50% cut to $2.50/mtok), GPT-4o mini (70-98.5% cut to $0.15/mtok), Llama 3.1 405B (46% cut to $2.7/mtok), and Mistral Large 2 (62% cut to $3/mtok). DeepSeek V2 introduced context caching, reducing input token costs by up to 90% to $0.014/mtok. New model releases include Llama 3.1 405B, Sonnet 3.5, EXAONE-3.0 (a 7.8B instruction-tuned model from LG AI Research), and MiniCPM-V 2.6 (a vision-language model combining SigLIP 400M and Qwen2-7B). Benchmarks show Mistral Large performing well on ZebraLogic and Claude 3.5 leading LiveBench. FlexAttention, a new PyTorch API, simplifies and optimizes attention mechanisms. Andrej Karpathy analyzed RLHF, highlighting its limitations compared to traditional reinforcement learning, and Google DeepMind research on compute-optimal scaling was also summarized.
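A minimal sketch of the FlexAttention API mentioned above, assuming a recent PyTorch release (2.5+) where it lives under torch.nn.attention.flex_attention; the causal score_mod and tensor shapes here are illustrative, not taken from the newsletter.

```python
# FlexAttention sketch: a score_mod function customizes attention scores
# before the softmax (illustrative shapes; causal masking as the example).
import torch
from torch.nn.attention.flex_attention import flex_attention

def causal(score, b, h, q_idx, kv_idx):
    # Keep scores where the query may attend to the key; mask the rest.
    return torch.where(q_idx >= kv_idx, score, -float("inf"))

B, H, S, D = 2, 8, 1024, 64
q, k, v = (torch.randn(B, H, S, D) for _ in range(3))

# Eager mode runs a reference implementation; wrapping the call in
# torch.compile fuses the score_mod into a single attention kernel.
out = flex_attention(q, k, v, score_mod=causal)
print(out.shape)  # torch.Size([2, 8, 1024, 64])
```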
Ring Attention for >1M Context
gemini-pro gemma-7b gemma-2b deepseek-coder-6.7b-instruct llama-cpp google cuda-mode nvidia polymind deepseek ollama runpod lmstudio long-context ringattention pytorch cuda llm-guessing-game chatbots retrieval-augmented-generation vram-optimization fine-tuning dynamic-prompt-optimization ml-workflows gpu-scaling model-updates liu zaharia abbeel
Google Gemini Pro has sparked renewed interest in long context capabilities. The CUDA MODE Discord is actively working on implementing the RingAttention paper by Liu, Zaharia, and Abbeel, including extensions from the World Model RingAttention paper, with PyTorch and CUDA implementations available. TheBloke Discord discussed various topics including LLM guessing-game evaluation, chatbot UX comparisons between Nvidia's Chat with RTX and Polymind, challenges in retrieval-augmented generation (RAG) integration, VRAM optimization, fine-tuning for character roleplay using Direct Preference Optimization (DPO), and model choices like deepseek-coder-6.7B-instruct. There was also discussion of ML workflows on Mac Studio, with preferences for llama.cpp over ollama, and scaling inference cost-effectively using GPUs like the 4090 on Runpod. LM Studio users face manual update requirements for version 0.2.16, which includes support for Gemma models and bug fixes, especially for macOS. The Gemma 7B model has had performance issues, while Gemma 2B received positive feedback.
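For context on what the CUDA MODE effort is implementing, here is a minimal single-process sketch of the blockwise online-softmax accumulation at the heart of RingAttention; shapes and block counts are illustrative assumptions, and a real implementation would rotate KV blocks between devices with collective communication rather than a Python loop.

```python
# Single-process reference for RingAttention-style accumulation: each query
# block sees KV blocks one at a time and accumulates attention with an
# online softmax, so the full attention matrix is never materialized.
import torch

def ring_attention_reference(q_blocks, kv_blocks, scale):
    outputs = []
    for q in q_blocks:                                  # each "device" owns one query block
        acc = torch.zeros_like(q)                       # running weighted sum of values
        row_max = torch.full(q.shape[:-1] + (1,), float("-inf"))
        denom = torch.zeros(q.shape[:-1] + (1,))
        for k, v in kv_blocks:                          # KV blocks arriving around the ring
            scores = (q @ k.transpose(-1, -2)) * scale
            blk_max = scores.amax(dim=-1, keepdim=True)
            new_max = torch.maximum(row_max, blk_max)
            correction = torch.exp(row_max - new_max)   # rescale old accumulator
            p = torch.exp(scores - new_max)
            acc = acc * correction + p @ v
            denom = denom * correction + p.sum(dim=-1, keepdim=True)
            row_max = new_max
        outputs.append(acc / denom)
    return torch.cat(outputs, dim=-2)

# Sanity check against full attention on one head.
S, D, blocks = 256, 64, 4
q, k, v = torch.randn(S, D), torch.randn(S, D), torch.randn(S, D)
scale = D ** -0.5
ref = torch.softmax((q @ k.T) * scale, dim=-1) @ v
out = ring_attention_reference(
    q.chunk(blocks, dim=-2),
    list(zip(k.chunk(blocks, dim=-2), v.chunk(blocks, dim=-2))),
    scale,
)
print(torch.allclose(out, ref, atol=1e-4))  # True
```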
1/3/2024: RIP Coqui
sdxl diffusers-0.25 coqui mozilla hugging-face google text-to-speech performance-optimization token-management transformer-architecture image-datasets web-crawling pytorch leaderboards
Coqui, a prominent open-source text-to-speech project from the Mozilla ML group, officially shut down. Discussions in the HuggingFace Discord highlighted skepticism about the claimed 3X speed-up for SDXL, attributing the gains more to techniques like torch.compile and changes around fp16 and attention than to diffusers 0.25 features. Users confirmed that a HuggingFace user token can be used across multiple machines, though distinct tokens are recommended for safety. The Open LLM Leaderboard briefly experienced issues but was later confirmed operational. A Kaggle notebook was shared demonstrating how to build Transformer architectures from scratch using PyTorch. Additionally, a new image dataset with 15k shoe, sandal, and boot images was introduced for multiclass classification tasks. Explanations of how the Common Crawl web-crawling process works were also shared.
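For reference, the torch.compile-based SDXL recipe the skeptics point to looks roughly like the following; the model id and compile mode follow the commonly documented diffusers pattern and are assumptions here, not details from the discussion.

```python
# Sketch of the commonly documented torch.compile speed-up for SDXL
# (illustrative; requires a CUDA GPU and the diffusers/transformers stack).
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16,
).to("cuda")

# Compiling the UNet, which dominates the compute, is where most of the
# speed-up typically comes from, independent of any specific diffusers release.
pipe.unet = torch.compile(pipe.unet, mode="reduce-overhead", fullgraph=True)

image = pipe("a photo of a red sneaker on a white background").images[0]
image.save("sneaker.png")
```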