Anthropic launches the MCP Apps open spec in Claude.ai
claude-ai toolorchestra-8b qwen3-max-thinking anthropic openai block vs-code antigravity jetbrains aws nvidia alibaba agent-orchestration reinforcement-learning recursive-language-models context-management user-experience security prompt-injection reasoning adaptive-tool-use model-evaluation benchmarking
Anthropic has officially absorbed the independent MCP UI project and, in collaboration with OpenAI, Block, VS Code, Antigravity, JetBrains, and AWS, released the MCP Apps spec with official support in Claude.ai. The standard aims to enable an ecosystem of interoperable applications with rich user interfaces, countering the proliferation of separate subscription services. Meanwhile, NVIDIA introduced ToolOrchestra, an 8B orchestrator model trained via scalable reinforcement learning for efficient agent orchestration. Recursive Language Models (RLMs) are gaining traction as an approach to efficient context management in agent stacks. The "Clawdbot" UX pattern emphasizes outcome-first assistant design with tight context and tool integration, though it has sparked security concerns around prompt injection. Alibaba launched Qwen3-Max-Thinking, a flagship reasoning and agent model with adaptive tool use and strong benchmark scores, now available on public evaluation platforms such as LM Arena and Yupp.
Claude 3 just destroyed GPT-4 (see for yourself)
claude-3 claude-3-opus claude-3-sonnet claude-3-haiku gpt-4 anthropic amazon google claude-ai multimodality vision long-context model-alignment model-evaluation synthetic-data structured-output instruction-following model-speed cost-efficiency benchmarking safety mmitchell connor-leahy
Claude 3 from Anthropic launches in three sizes: Haiku (small, not yet released), Sonnet (medium, the default on claude.ai, AWS, and GCP), and Opus (large, available with Claude Pro). Opus outperforms GPT-4 on key benchmarks such as GPQA, impressing the benchmark's authors. All three models are multimodal with advanced vision capabilities; one demo converted a 2-hour video into a blog post. Claude 3 offers improved alignment, fewer refusals, and a 200K-token context window (with 1M-token capability for select customers) with near-perfect recall. Haiku is noted for speed and cost-efficiency, processing a dense research paper in under three seconds. The models excel at following complex instructions and producing structured outputs such as JSON. Safety improvements reduce over-refusal rates, though some experts remain critical. Training incorporated synthetic data, and the models show strong domain-specific evaluation results in finance, medicine, and philosophy.