All tags
Topic: "coding-agents"
not much happened today
kimi-linear-48b codex gpt-5.4 claude-code moonshot openai assemblyai langchain attention-mechanisms model-architecture inference-speed agent-feedback agent-skills multi-agent-systems knowledge-transfer cli-tools coding-agents model-deployment kimi_moonshot elonmusk yuchenj_uw nathancgy4 eliebakouch tokenbender behrouz_ali cloneofsimo fidjissimo sama gdb andrewyng itsafiz simplifyinai
Moonshot's Attention Residuals paper introduced an input-dependent attention mechanism over prior layers with a 1.25x compute advantage and less than 2% inference latency overhead, validated on Kimi Linear 48B total / 3B active. The paper sparked debate on novelty versus prior art like DeepCrossAttention and Google’s earlier work, highlighting tensions in idea novelty, citation quality, and frontier-scale validation. OpenAI's Codex showed strong momentum with over 2M weekly active users, nearly 4x growth YTD, and GPT-5.4 hitting 5T tokens/day and a $1B annualized run-rate. Codex added subagents supporting multi-agent coding workflows. Infrastructure for coding agents matured with tools like Context Hub / chub supporting agent feedback loops, AssemblyAI's skill for Claude Code and Codex, and automated skill extraction from GitHub repos yielding 40% knowledge-transfer gains. LangChain launched LangGraph CLI and open-sourced Deep Agents, recreating top coding agent workflows with planning, filesystem ops, shell access, and sub-agents.
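The paper's exact formulation isn't reproduced in the digest, but the core idea of an input-dependent mix over prior layers' hidden states can be sketched as below; the gating projection `W_gate`, the shapes, and the softmax mixing are illustrative assumptions, not Moonshot's published design:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention_residual(h_prior, x, W_gate):
    """Mix prior-layer hidden states with input-dependent weights.

    h_prior: (L, d) -- one hidden state per prior layer
    x:       (d,)   -- current token's input to this layer
    W_gate:  (d, L) -- projects x to one mixing logit per prior layer
    """
    weights = softmax(x @ W_gate)   # (L,) input-dependent mixture
    return weights @ h_prior        # (d,) weighted residual over prior layers

rng = np.random.default_rng(0)
L, d = 4, 8
h_prior = rng.normal(size=(L, d))
x = rng.normal(size=d)
W_gate = rng.normal(size=(d, L))
res = attention_residual(h_prior, x, W_gate)
print(res.shape)  # (8,)
```

The cheapness of the mechanism is visible here: per token it adds only one small projection and a weighted sum, which is consistent with the reported sub-2% latency overhead.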
not much happened today
gpt-5.4 openai anthropic uber nous-research cursor_ai redisinc artificialanlys langchain-js agent-infrastructure mcp-protocol harnesses coding-agents evaluation-methodologies agent-ui-ux runtime-environments multi-axis-evaluation automation workflow-optimization open-agent-platforms provider-integration filesystem-checkpoints mattturck hwchase17 omarsar0 gergelyorosz htihle theprimeagen sydneyrunkle corbtt
Harnesses, agent infrastructure, and the MCP protocol are central themes, with emphasis on how harnesses, sandboxes, filesystem access, skills, memory, and observability shape agent UI/UX and runtime environments. Despite jokes about MCP's demise, it remains vital in production, notably used internally by Uber and supported by Anthropic. The coding-agent stack is evolving with CursorBench combining offline and online metrics to evaluate models on intelligence and efficiency, where GPT-5.4 leads in correctness and token efficiency. Agent-assisted development is splitting between automation-heavy workflows and "stay-in-the-loop" tooling, with OpenAI advancing Codex Automations featuring worktree vs. branch choices and UI customization. The open agent platform Hermes Agent v0.2.0 introduces full MCP client support, ACP server for editors, and expanded provider integrations including OpenAI OAuth.
Autoresearch: Sparks of Recursive Self Improvement
claude-3 codex anthropic openai cognition automated-machine-learning coding-agents bug-fixing model-autonomy multi-agent-systems pr-review systems-engineering model-verification karpathy yi_tay jakub_pachocki
The report covers AI developments from 3/5/2026 to 3/9/2026, highlighting the emergence of LLMs autonomously training smaller LLMs, marking a significant "AutoML moment" in AI progress. Karpathy and Yi Tay discuss "vibe training," where AI models fix bugs and improve code autonomously, suggesting models may soon surpass human debugging efficiency. The report anticipates Jakub Pachocki's Automated AI Research Intern system by September 2026 to accelerate human researchers. On AI Twitter, the focus is on coding agents shifting bottlenecks from implementation to review and verification, with Anthropic's Claude Code Review significantly improving PR review effectiveness, and tools like OpenAI Codex Review and Cognition's Devin Review enhancing code review workflows. Harness engineering is evolving into systems engineering, emphasizing decoupling agent storage from compute for collaborative agent teams.
not much happened today
qwen-3.5-0.8b qwen-3.5-2b qwen-3.5-4b qwen-3.5-9b codex-5.3 claude-3 alibaba ollama lm-studio openai anthropic multimodality reinforcement-learning long-context hybrid-attention on-device-ai model-deployment agent-reliability agent-observability coding-agents benchmarking runtime-optimization token-efficiency nrehiew_ kimmonismus lioronai danielhanchen theo htihle teortaxestex theprimeagen yuchenj_uw _lewtun saen_dev _philschmid omarsar0
Alibaba released the Qwen 3.5 series with models ranging from 0.8B to 9B parameters, featuring native multimodality, scaled reinforcement learning, and targeting edge and lightweight agent deployments. The models support very long context windows up to 262K tokens (extendable to 1M) and use a novel Gated DeltaNet hybrid attention architecture combining linear and full attention layers. Deployment examples include Ollama and LM Studio, with a notable 6-bit on-device demo on iPhone 17 Pro. Evaluators are cautioned that reasoning is disabled by default on smaller models. In coding agents, Codex 5.3 shows promising benchmark results on WeirdML with 79.3% accuracy, though availability and downtime remain critical challenges, especially highlighted by Claude outages. Agent reliability and observability are emphasized as cross-functional problems requiring clear success criteria and practical evaluation strategies. Studies show that using AGENTS.md and SKILL.md guardrails can significantly reduce runtime and token usage by mitigating worst-case thrashing in coding workflows.
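As a rough illustration of what a hybrid linear/full attention stack looks like, the sketch below builds a layer schedule that interleaves blocks of linear-attention layers with periodic full-attention layers; the 3:1 ratio and the `hybrid_schedule` helper are assumptions for illustration, not the published Qwen 3.5 configuration:

```python
def hybrid_schedule(n_layers: int, linear_per_full: int = 3) -> list[str]:
    """Interleave linear-attention and full-attention layers.

    Each block of `linear_per_full` linear layers is followed by one
    full-attention layer. The 3:1 default is an illustrative choice,
    not the actual Gated DeltaNet layout.
    """
    schedule = []
    for i in range(n_layers):
        if (i + 1) % (linear_per_full + 1) == 0:
            schedule.append("full")   # periodic full attention for global recall
        else:
            schedule.append("linear") # cheap linear attention for long context
    return schedule

print(hybrid_schedule(8))
# ['linear', 'linear', 'linear', 'full', 'linear', 'linear', 'linear', 'full']
```

The design intuition: linear layers keep per-token cost and KV-cache size low over 262K+ contexts, while the sparse full-attention layers preserve exact global retrieval.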
Nano Banana 2 aka Gemini 3.1 Flash Image Preview: the new SOTA Imagegen model
gemini-3.1-flash gpt-5.2 gpt-5.3-codex opus-4.6 claude google google-deepmind microsoft anthropic perplexity-ai image-generation text-rendering 3d-imaging real-time-information agentic-ai persistent-memory multi-agent-systems tooling coding-agents task-delegation sundarpichai demishassabis mustafasuleyman yusuf_i_mehdi borisdayma aravsrinivas
Google and DeepMind launched Nano Banana 2 (aka Gemini 3.1 Flash Image Preview), a leading image generation and editing model integrated across multiple Google products with features like 4K upscaling, multi-subject consistency, and real-time search-conditioned generation. Evaluations rank it #1 in text-to-image tasks with competitive pricing. Additionally, advances in agentic coding are noted with models like GPT-5.2, GPT-5.3 Codex, Opus 4.6, and Gemini 3.1, alongside Microsoft's Copilot Tasks introducing task delegation. Persistent memory features are rolling out in Claude models, though interoperability challenges remain.
Agentic Engineering: WTF Happened in December 2025?
gpt-5.3-codex claude-code perplexity openai anthropic langchain-ai coding-agents agent-architecture distributed-workflows usage-based-pricing model-routing benchmarking context-length observability software-development karpathy aravsrinivas lioronai denisyarats swyx catwu hwchase17
Perplexity launched Computer, an orchestration-first agent platform featuring multi-model routing, usage-based pricing, and parallel asynchronous sub-agents for distributed workflows. Andrej Karpathy claims a "phase change" in coding agents since December, highlighting sustained long-horizon task completion. OpenAI released GPT-5.3-Codex with ~25% speed improvements and strong benchmark performance, while Claude Code celebrates its first year with ecosystem integrations and scaling challenges. This marks a significant shift in coding workflows and agent-based software development.
Anthropic accuses DeepSeek, Moonshot, and MiniMax of "industrial-scale distillation attacks".
claude claude-3 codex claude-code anthropic deepseek moonshot-ai minimax openai ollama api-abuse-resistance model-security agentic-engineering coding-agents model-distillation workflow-automation sandboxing realtime-communication simon_willison
Anthropic alleges industrial-scale distillation attacks on its Claude model by DeepSeek, Moonshot AI, and MiniMax, involving ~24,000 fraudulent accounts and >16M Claude exchanges to extract capabilities, raising concerns about competitive risks and safety. The community debates the difference between scraping and API-output extraction, highlighting a shift toward protecting models via API abuse resistance techniques. Meanwhile, coding agents like Codex and Claude Code see real adoption and failures, with emerging best practices in "agentic engineering" led by Simon Willison. The OpenClaw ecosystem expands with alternatives like NanoClaw and integrations such as Ollama 0.17 simplifying open model usage.
ElevenLabs $500m Series D at $11B, Cerebras $1B Series H at $23B, Vibe Coding -> Agentic Engineering
gemini-3 claude codex google openai github microsoft deepmind agent-frameworks model-deployment benchmarking cost-optimization software-development async-processing gpu-acceleration coding-agents user-adoption game-theory workflow-integration sama sundarpichai reach_vb
Google's Gemini 3 is being integrated widely, including a new Chrome side panel and Nano Banana UX features, with rapid adoption and a 78% reduction in unit serving costs. The Gemini app reached 750M+ MAU in Q4 2025, nearing ChatGPT's user base. Google is also benchmarking AI "soft skills" through games like Poker and Chess in the Kaggle Game Arena. Meanwhile, coding agents are converging in IDEs: VS Code launched Agent Sessions supporting Claude and Codex agents with features like parallel subagents and integrated browsers. GitHub Copilot now allows agent choice between Claude and OpenAI Codex for async backlog clearing. OpenAI reports 1M+ active users for Codex with expanded integration surfaces, though some users request better GPU support. The coding-agent ecosystem is professionalizing with community platforms like OpenClaw and tooling such as ClawHub and CLI updates. "Gemini 3 adoption faster than any other model" and "VS Code as home for coding agents" highlight major industry shifts.
not much happened today
nouscoder-14b deepseek-r1 langchain cursor huggingface openai weights-biases agent-frameworks context-management reinforcement-learning operational-safety model-transparency trajectory-exploration token-optimization coding-agents integration-platforms karpathy _philschmid omarsar0
AI News for 1/6/2026-1/7/2026 highlights a quiet day with key updates on LangChain DeepAgents introducing Ralph Mode for persistent agent loops, Cursor improving context management by reducing token usage by 46.9%, and operational safety measures for coding agents with allow/deny lists. MCP integration is expanding across assistants and robotics, with Hugging Face embedding assistants via HuggingChat + HF MCP server. The DeepSeek-R1 paper has been expanded to 86 pages, emphasizing trajectory exploration and RL shaping behavior. NousCoder-14B shows a +7% improvement on LiveCodeBench after 4 days of RL training, demonstrating advances in RL for coding with small open models. Top tweets also mention a viral "96GB RAM laptop", ChatGPT Health launch by OpenAI, and Karpathy's nanochat scaling-law miniseries.
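The allow/deny-list idea for coding-agent operational safety can be sketched as a simple command gate in front of the agent's shell tool; the specific lists and the `is_permitted` helper below are illustrative, not any product's actual policy:

```python
import shlex

# Illustrative policy: deny always wins, and anything not explicitly
# allowed is rejected by default. These lists are examples only.
ALLOW = {"ls", "cat", "git", "python", "pytest"}
DENY = {"rm", "curl", "ssh", "sudo"}

def is_permitted(command: str) -> bool:
    """Gate a shell command before the agent may execute it."""
    try:
        argv = shlex.split(command)   # parse like a POSIX shell would
    except ValueError:
        return False                  # unparseable input: reject
    if not argv:
        return False                  # empty command: reject
    prog = argv[0]
    if prog in DENY:
        return False                  # explicit deny beats everything
    return prog in ALLOW              # default-deny for unknown programs

print(is_permitted("git status"))  # True
print(is_permitted("rm -rf /"))    # False
print(is_permitted("make build"))  # False (not on the allow list)
```

Default-deny with an explicit deny override is the conservative choice for autonomous loops, where a single destructive command can outweigh many successful ones.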
not much happened today
Poolside raised $1B at a $12B valuation. Eric Zelikman raised $1B after leaving xAI. Weavy joined Figma. New research highlights FP16 precision reduces training-inference mismatch in reinforcement-learning fine-tuning compared to BF16. Kimi AI introduced a hybrid KDA (Kimi Delta Attention) architecture improving long-context throughput and RL stability, alongside a new Kimi CLI for coding with agent protocol support. OpenAI previewed Agent Mode in ChatGPT enabling autonomous research and planning during browsing.
not much happened today
kimi-linear kimi-delta-attention minimax-m2 looped-llms aardvark-gpt-5 moonshot-ai minimax bytedance princeton mila openai cursor cognition hkust long-context attention-mechanisms agentic-ai tool-use adaptive-compute coding-agents performance-optimization memory-optimization reinforcement-learning model-architecture kimi_moonshot scaling01 uniartisan omarsar0 aicodeking songlinyang4 iscienceluvr nrehiew_ gdb embeddedsec auchenberg simonw
Moonshot AI released Kimi Linear (KDA) with day-0 infrastructure and strong long-context metrics, achieving up to 75% KV cache reduction and 6x decoding throughput. MiniMax M2 pivoted to full attention for multi-hop reasoning, maintaining strong agentic coding performance with 200k context and ~100 TPS. ByteDance, Princeton, and Mila introduced Looped LLMs showing efficiency gains comparable to larger transformers. OpenAI's Aardvark (GPT-5) entered private beta as an agentic security researcher for scalable vulnerability discovery. Cursor launched faster cloud coding agents, though transparency concerns arose regarding base-model provenance. Cognition released a public beta for a desktop/mobile tool-use agent named Devin. The community discussed advanced attention mechanisms and adaptive compute techniques.
not much happened today
claude-3-sonnet claude-3-opus gpt-5-codex grok-4-fast qwen-3-next gemini-2.5-pro sora-2-pro ray-3 kling-2.5 veo-3 modernvbert anthropic x-ai google google-labs openai arena epoch-ai mit luma akhaliq coding-agents cybersecurity api model-taxonomy model-ranking video-generation benchmarking multi-modal-generation retrieval image-text-retrieval finbarrtimbers gauravisnotme justinlin610 billpeeb apples_jimmy akhaliq
Anthropic announces a new CTO. Frontier coding agents see updates with Claude Sonnet 4.5 showing strong cybersecurity and polished UX but trailing GPT-5 Codex in coding capability. xAI Grok Code Fast claims higher edit success at lower cost. Google's Jules coding agent launches a programmable API with CI/CD integration. Qwen clarifies its model taxonomy and API tiers. Vision/LM Arena rankings show a tight competition among Claude Sonnet 4.5, Claude Opus 4.1, Gemini 2.5 Pro, and OpenAI's latest models. In video generation, Sora 2 Pro leads App Store rankings with rapid iteration and a new creator ecosystem; early tests show it answers GPQA-style questions at 55% accuracy versus GPT-5's 72%. Video Arena adds new models like Luma's Ray 3 and Kling 2.5 for benchmarking. Multi-modal video+audio generation model Ovi (Veo-3-like) is released. Retrieval models include ModernVBERT from MIT with efficient image-text retrieval capabilities. "Claude Sonnet 4.5 is basically the same as Opus 4.1 for coding" and "Jules is a programmable team member" highlight key insights.
not much happened today
gpt-5 kimi-k2-0905 glm-4.5 qwen3-asr opus-4.1 cognition founders-fund lux-capital 8vc neo vercel claude groq alibaba huggingface meta-ai-fair google theturingpost algoperf coding-agents agent-architecture open-source model-evaluation multilingual-models speech-recognition model-optimization kv-cache quantization algorithmic-benchmarking video-generation context-windows swyx tim_dettmers
Cognition raised $400M at a $10.2B valuation to advance AI coding agents, with swyx joining to support the "Decade of Agents" thesis. Vercel launched an OSS "vibe coding platform" using a tuned GPT-5 agent loop. Claude Code emphasizes minimalism in agent loops for reliability. Kimi K2-0905 achieved 94% on coding evals and improved agentic capabilities with doubled context length. Alibaba released Qwen3-ASR, a multilingual transcription model with <8% WER. Meta introduced Set Block Decoding for 3-5× faster decoding without architectural changes. Innovations in KV cache compression and quantization include AutoRound, QuTLASS v0.1.0, and AlgoPerf v0.6. Google's Veo 3 video generation API went GA with significant price cuts and vertical video support.
Cognition's $10b Series C; Smol AI updates
kimi-k2-0905 qwen3-asr gpt-5 cognition vercel meta-ai-fair alibaba groq huggingface coding-agents agent-development open-source model-evaluation multilingual-models inference-optimization kv-cache-compression quantization algorithmic-benchmarking context-length model-performance swyx
Cognition raised $400M at a $10.2B valuation to advance AI coding agents, with swyx joining the company. Vercel launched an OSS coding platform using a tuned GPT-5 agent loop. The Kimi K2-0905 model achieved top coding eval scores and improved agentic capabilities with doubled context length. Alibaba released Qwen3-ASR, a multilingual transcription model with robust noise handling. Meta introduced Set Block Decoding for 3-5× faster decoding without architectural changes. Innovations in KV cache compression and quantization were highlighted, including AutoRound in SGLang and QuTLASS v0.1.0 for Blackwell GPUs. Algorithmic benchmarking tools like AlgoPerf v0.6 were updated for efficiency.
not much happened today
claude-code gemini qwen3-coder gemini-2.5-flash exa openpipe coreweave statsig openai zed claude gemini langchain anthropic fair alibaba hud-evals agent-protocols interoperability standardization agent-evaluation coding-agents software-optimization web-browsing reinforcement-learning multi-turn-reasoning optimizer-design data-efficient-rlvr leaderboards benchmarking zeddotdev mathemagic1an hwchase17 giffmana gneubig crystalsssup sayashk _philschmid _akhaliq jaseweston
Exa raised a $700m Series B, OpenPipe was acquired by CoreWeave, and Statsig and Alex were acquired by OpenAI. The Agent/Client Protocol (ACP) was introduced by the Zed team to standardize IDE-agent interoperability, supporting Claude Code and Gemini CLIs. LangChain 1.0 alpha unifies content blocks for reasoning and multimodal data. The OSWorld Verified leaderboard promotes reproducible evaluation of computer-use agents including OpenAI and Anthropic models. FAIR revealed evidence of coding agents cheating on SWE-Bench Verified. PR Arena hosts live coding agent competitions. Benchmarks like GSO and Holistic Agent Leaderboard test software optimization and web browsing tasks, with Qwen3-Coder and Gemini 2.5 Flash showing strong performance. Advances in reinforcement learning for tool use include SimpleTIR improving multi-turn tool use success rates and UI-TARS-2 advancing GUI agents. The DARLING optimizer improves quality and diversity in reasoning and instruction following, while DEPO achieves data-efficient RLVR with significant speedups.
Gemini's AlphaEvolve agent uses Gemini 2.0 to find new Math and cuts Gemini cost 1% — without RL
gemini gpt-4.1 gpt-4o-mini o3 o4-mini google-deepmind openai algorithm-discovery coding-agents matrix-multiplication optimization reinforcement-learning model-weights training-efficiency safety-evaluations instruction-following coding-tasks model-releases _philschmid scott_swingle alex_dimakis henry jason_wei kevinweil michpokrass scaling01 gdb
DeepMind's AlphaEvolve, a 2025 update to AlphaTensor and FunSearch, is a Gemini-powered coding agent for algorithm discovery that designs faster matrix multiplication algorithms, solves open math problems, and improves data center and AI training efficiency. It achieves a 23% kernel speedup in Gemini training and surpasses state-of-the-art on 20% of applied problems, including improvements on the Minimum Overlap Problem and Kissing number problem. Unlike Deep-RL, it optimizes code pieces rather than model weights. Meanwhile, OpenAI released GPT-4.1 in ChatGPT, specializing in coding and instruction following, with a faster alternative GPT-4.1 mini replacing GPT-4o mini for all users. OpenAI also launched the Safety Evaluations Hub and the OpenAI to Z Challenge using o3/o4 mini and GPT-4.1 models to discover archaeological sites. "Maybe midtrain + good search is all you need for AI for scientific innovation" - Jason Wei.
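AlphaEvolve's own discoveries aren't public in code form here, but the classic Strassen algorithm illustrates the kind of result algorithm discovery targets: multiplying 2x2 matrices with 7 scalar multiplications instead of the naive 8 (Strassen, 1969 — shown only as an analogy, not AlphaEvolve's algorithm):

```python
import numpy as np

def strassen_2x2(A, B):
    """Multiply two 2x2 matrices with 7 scalar multiplications
    instead of the naive 8, trading multiplies for extra additions."""
    a, b, c, d = A[0, 0], A[0, 1], A[1, 0], A[1, 1]
    e, f, g, h = B[0, 0], B[0, 1], B[1, 0], B[1, 1]
    p1 = a * (f - h)
    p2 = (a + b) * h
    p3 = (c + d) * e
    p4 = d * (g - e)
    p5 = (a + d) * (e + h)
    p6 = (b - d) * (g + h)
    p7 = (a - c) * (e + f)
    return np.array([[p5 + p4 - p2 + p6, p1 + p2],
                     [p3 + p4,           p1 + p5 - p3 - p7]])

A = np.array([[1., 2.], [3., 4.]])
B = np.array([[5., 6.], [7., 8.]])
assert np.allclose(strassen_2x2(A, B), A @ B)  # matches the naive product
```

Applied recursively, saving one multiplication per 2x2 block drops matrix multiplication below cubic cost; AlphaEvolve searches for exactly this kind of lower-multiplication decomposition, but on larger block sizes.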
OpenAI o3, o4-mini, and Codex CLI
o3 o4-mini gemini-2.5-pro claude-3-sonnet chatgpt openai reinforcement-learning performance vision tool-use open-source coding-agents model-benchmarking multimodality scaling inference sama aidan_mclau markchen90 gdb aidan_clark_ kevinweil swyx polynoamial scaling01
OpenAI launched the o3 and o4-mini models, emphasizing improvements in reinforcement-learning scaling and overall efficiency, making o4-mini cheaper and better across prioritized metrics. These models showcase enhanced vision and tool use capabilities, though API access for these features is pending. The release includes Codex CLI, an open-source coding agent that integrates with these models to convert natural language into working code. Accessibility extends to ChatGPT Plus, Pro, and Team users, with o3 being notably more expensive than Gemini 2.5 Pro. Performance benchmarks highlight the intelligence gains from scaling inference, with comparisons against models like Sonnet and Gemini. The launch has been well received despite some less favorable evaluation results.
not much happened today
phi-4 reinforce++ arc-agi-2 ai21-labs ollama langchain togethercompute groq reinforcement-learning ppo model-optimization memory-efficiency python-packages vision text-extraction frontend-code-generation workflow-automation coding-agents compute-cost-reduction ethical-ai agi-benchmarks scam-alerts sebastien-bubeck fchollet tom-doerr arohan_ bindureddy hwchase17 jonathanross321 clementdelangue vikhyatk
REINFORCE++ was introduced, enhancing classical REINFORCE with PPO-inspired techniques for 30% faster training. Microsoft released Phi-4 under the MIT License, accessible via Ollama. François Chollet announced plans for ARC-AGI-2 and a next-generation AGI benchmark. LangChain launched 10 new integration packages to boost LLM application development. Tom Doerr introduced Ollama-OCR, a Python package for text extraction using vision language models. Arohan optimized Shampoo for memory efficiency, reducing usage from 20 to 6 bytes per parameter. Bindu Reddy showcased CodeLLM's v1 for frontend code generation and highlighted LlamaIndex Workflows for academic summarization and slide generation. Hwchase17 collaborated with Together Compute to enhance WebDev Arena with complex coding agents for LLM coding evaluations. Jonathan Ross detailed Groq's mission to reduce compute costs by 1000x amid rising generative AI spending. Clement Delangue warned about scam alerts involving false claims of association with AI21. Vikhyat K raised concerns about the ethical implications and trade-offs of AGI. Memes and humor included creative AI prompts and critiques of LLM behaviors.
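The REINFORCE++ recipe isn't reproduced exactly here, but its general shape — a PPO-style clipped surrogate applied to critic-free REINFORCE advantages — can be sketched as below; the `clipped_reinforce_loss` function and example values are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def clipped_reinforce_loss(logp_new, logp_old, advantages, eps=0.2):
    """PPO-style clipped surrogate on REINFORCE-style advantages.

    No learned critic: `advantages` come from rewards minus a simple
    baseline. Clipping the probability ratio limits how far a single
    update can move the policy, stabilizing plain REINFORCE.
    """
    ratio = np.exp(logp_new - logp_old)          # pi_new / pi_old per token
    unclipped = ratio * advantages
    clipped = np.clip(ratio, 1 - eps, 1 + eps) * advantages
    # Pessimistic bound: take the smaller objective, negate for a loss.
    return -np.mean(np.minimum(unclipped, clipped))

logp_new = np.array([-1.0, -0.5, -2.0])
logp_old = np.array([-1.2, -0.4, -2.0])
adv = np.array([1.0, -0.5, 0.3])
loss = clipped_reinforce_loss(logp_new, logp_old, adv)
print(float(loss))
```

The speedup claim plausibly comes from dropping the critic network entirely while keeping PPO's trust-region-like stability via the clip.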
ReALM: Reference Resolution As Language Modeling
flan-t5 gpt-4 apple openai hugging-face stability-ai reference-resolution finetuning quantization retrieval-augmented-generation open-source coding-agents podcast-generation image-generation ai-industry-trends takuto-takizawa
Apple is advancing in AI with a new approach called ReALM: Reference Resolution As Language Modeling, which improves understanding of ambiguous references across three entity types (on-screen, conversational, and background) and finetunes a smaller FLAN-T5 model that outperforms GPT-4 on this task. In Reddit AI news, an open-source coding agent SWE-agent achieves 12.29% on the SWE-bench benchmark, and RAGFlow introduces a customizable retrieval-augmented generation engine. A new quantization method, QuaRot, enables efficient 4-bit inference. AI applications include a t-shirt design generator, podgenai for GPT-4 based podcast generation, and an open-source model from HuggingFace that runs without a GPU. Industry discussions focus on the impact of large language models on the AI field and efforts to decentralize AI development. Takuto Takizawa joins Stability AI Japan as Head of Sales & Partnerships.