Topic: "sandboxing"
Anthropic accuses DeepSeek, Moonshot, and MiniMax of "industrial-scale distillation attacks".
claude claude-3 codex claude-code anthropic deepseek moonshot-ai minimax openai ollama api-abuse-resistance model-security agentic-engineering coding-agents model-distillation workflow-automation sandboxing realtime-communication simon_willison
Anthropic alleges industrial-scale distillation attacks on its Claude model by DeepSeek, Moonshot AI, and MiniMax, involving ~24,000 fraudulent accounts and >16M Claude exchanges to extract capabilities, raising concerns about competitive risks and safety. The community debates the difference between scraping and API-output extraction, highlighting a shift toward protecting models via API abuse resistance techniques. Meanwhile, coding agents like Codex and Claude Code see real adoption and failures, with emerging best practices in "agentic engineering" led by Simon Willison. The OpenClaw ecosystem expands with alternatives like NanoClaw and integrations such as Ollama 0.17 simplifying open model usage.
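To illustrate the kind of API abuse-resistance technique the community is discussing, here is a minimal sketch of flagging accounts whose traffic looks like bulk extraction. All names and thresholds are hypothetical; this is not Anthropic's actual defense, just the intuition that distillation pairs very high request volume with near-total prompt uniqueness:

```python
from collections import defaultdict

class ExtractionDetector:
    """Hypothetical per-account counters. A real system would aggregate
    far richer signals (IPs, billing identity, prompt embeddings)."""

    def __init__(self, volume_threshold=10_000, diversity_threshold=0.95):
        self.requests = defaultdict(int)        # account -> request count
        self.unique_prompts = defaultdict(set)  # account -> prompt hashes
        self.volume_threshold = volume_threshold
        self.diversity_threshold = diversity_threshold

    def record(self, account: str, prompt: str) -> None:
        self.requests[account] += 1
        self.unique_prompts[account].add(hash(prompt))

    def suspicious(self, account: str) -> bool:
        # Bulk distillation tends to combine high volume with almost no
        # repeated prompts, since the prompts are generated synthetically.
        n = self.requests[account]
        if n < self.volume_threshold:
            return False
        diversity = len(self.unique_prompts[account]) / n
        return diversity > self.diversity_threshold
```

A scraping-vs-extraction debate like the one above turns partly on whether such behavioral signals can distinguish heavy legitimate use from coordinated fraudulent accounts.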
not much happened today
gpt-5.3-codex claude-opus-4.6 nanochat-gpt-2 openai anthropic langchain agent-systems ai-engineering benchmarking software-organization sandboxing tracing state-management recursive-language-models context-management karpathy sama swyx omarsar0 hamelhusain deepfates
AI News for early February 2026 highlights a detailed comparison between GPT-5.3-Codex and Claude Opus 4.6, with users noting Codex's strength in detailed scoped tasks and Opus's ergonomic advantage for exploratory work. Benchmarks on Karpathy's nanochat GPT-2 speedrun show Opus 4.6 achieving better wall-clock performance, while Codex-5.3-xhigh sometimes suffers from context issues. Karpathy cautions that current models are not yet reliable for fully autonomous AI engineering. Discussions on agent swarms reveal emerging parallels to software organizational design, with Anthropic-style agent coordination systems and LangChain/LangSmith emphasizing environment engineering through tracing, sandboxing, and state control. The concept of Recursive Language Models (RLM) is introduced as a future direction for agent systems to reduce context rot and improve structured communication.
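The recursive idea behind RLMs can be sketched in a few lines: when input exceeds a context budget, split it, process the halves recursively, and combine the sub-results in a fresh, small context. `call_model` below is a stand-in stub, not a real model call, and the whole sketch is illustrative rather than any published RLM implementation:

```python
def call_model(prompt: str) -> str:
    # Stub standing in for an LLM call; here it just truncates so the
    # sketch is runnable. A real system would return a model summary.
    return prompt[:100]

def recursive_process(text: str, budget: int = 200) -> str:
    """Process text that may exceed the context budget by recursing."""
    if len(text) <= budget:
        return call_model(text)
    mid = len(text) // 2
    left = recursive_process(text[:mid], budget)
    right = recursive_process(text[mid:], budget)
    # Each combination step sees only two compact sub-results, which is
    # how the scheme avoids the "context rot" of one ever-growing window.
    return call_model(left + "\n" + right)
```

The structured, bounded hand-offs between recursion levels are what the discussion above frames as improved communication between agents.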
Anthropic Labs: Cowork, Claude Code, MCP, Skills incubator led by Mike Krieger and Ben Mann
claude claude-code anthropic langchain apple sandboxing agent-ux agent-orchestration human-in-the-loop memory-management tooling-simplification linux-virtualization security agent-productization mike_krieger ben_mann gergely_orosz yuchen_jin harrison_chase jared_z
Anthropic consolidates its AI agent products under the Cowork brand, integrating prior tools like Claude Code and Claude for Chrome into a unified agent with sandboxed Linux VM environments that use Apple's virtualization framework and bubblewrap for security. Meanwhile, Anthropic Labs reorganizes, with Mike Krieger stepping down as CPO to lead a >$1B ARR agent lab focused on productizing Claude. The AI community debates the meaning of "vibe coding," emphasizing disciplined verification by engineers over casual coding. LangChain launches Agent Builder GA, a no-code but powerful agent-orchestration offering with memory, triggers, and human-in-the-loop approvals. Some experts advocate simplifying agent tooling down to core filesystem and bash access for efficiency. Open-source recreations of Cowork-like environments using QEMU and sandboxing tools highlight the rapid commoditization of AI agent tech.
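The "core filesystem and bash access" position can be made concrete with a sketch of a two-tool agent surface. Function names and shapes here are illustrative assumptions, not Anthropic's or LangChain's actual APIs:

```python
import subprocess
from pathlib import Path

def read_file(path: str) -> str:
    return Path(path).read_text()

def write_file(path: str, content: str) -> str:
    Path(path).write_text(content)
    return f"wrote {len(content)} bytes to {path}"

def run_bash(command: str, timeout: int = 30) -> str:
    # In production this would execute inside a sandbox (a Linux VM or
    # bubblewrap, as above); here it runs directly for illustration only.
    result = subprocess.run(
        command, shell=True, capture_output=True, text=True, timeout=timeout
    )
    return result.stdout + result.stderr

# The entire tool registry an agent loop would dispatch against.
TOOLS = {"read_file": read_file, "write_file": write_file, "bash": run_bash}
```

The appeal of this minimalism is that bash already composes with everything else on the system, so the agent needs few bespoke tools; the cost is that the sandbox boundary, not the tool list, must carry the security guarantees.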