Topic: "external-memory"
not much happened today
claude codex langsmith-engine smithdb duet-agent multi-stream-llm delta-mem star-elastic cline langchain notion cursor nous-research nvidia datology agent-infrastructure developer-platforms observability long-running-state streaming orchestration pretraining-efficiency model-architecture external-memory post-training-compression data-curation vision-language-models jonas_geiping siddharth_joshi pratyush_maini
Cline, LangChain, Notion, and Cursor advanced agent infrastructure and developer platforms with the Cline SDK, LangSmith Engine, SmithDB (12–15× faster observability), and Notion's External Agents API, which integrates third-party agents such as Claude and Codex. Agent UX trends favor long-running state, streaming, and orchestration over chat, with tools like Duet Agent and the VS Code Agents window supporting durable execution and inspectable state. Research highlights include Nous Research's Token Superposition Training, which achieves a 2–3× pretraining speedup; a multi-stream LLM architecture for parallel reasoning from Jonas Geiping et al.; and δ-mem, an external-memory approach that improves benchmark scores. NVIDIA's Star Elastic offers post-training model compression at 360× lower cost than pretraining, while Datology focuses on data curation for vision-language models.
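δ-mem's internals are not described in this digest, so as a generic illustration of the external-memory pattern it belongs to, the sketch below implements a minimal slot-based key-value store read by cosine similarity. Everything in it (the class name, the ring-buffer eviction, the top-k averaged read) is a hypothetical stand-in, not δ-mem's actual design.

```python
import torch
import torch.nn.functional as F

class ExternalMemory:
    """Generic slot-based external memory; a hypothetical sketch, not δ-mem."""

    def __init__(self, slots: int, dim: int):
        self.keys = torch.zeros(slots, dim)
        self.values = torch.zeros(slots, dim)
        self.slots = slots
        self.writes = 0

    def write(self, key: torch.Tensor, value: torch.Tensor) -> None:
        # Ring-buffer eviction (hypothetical policy): overwrite the oldest slot.
        i = self.writes % self.slots
        self.keys[i], self.values[i] = key, value
        self.writes += 1

    def read(self, query: torch.Tensor, topk: int = 4) -> torch.Tensor:
        # Average the values of the top-k stored keys most similar to the query.
        n = min(self.writes, self.slots)
        if n == 0:
            return torch.zeros_like(query)
        sims = F.cosine_similarity(self.keys[:n], query.unsqueeze(0), dim=-1)
        idx = sims.topk(min(topk, n)).indices
        return self.values[idx].mean(dim=0)
```

A production agent memory would typically persist these tensors and retrieve with an approximate-nearest-neighbor index rather than a dense scan, but the write/read contract is the same.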
Contextual Position Encoding (CoPE)
cope gemini-1.5-flash gemini-1.5-pro claude gpt-3 meta-ai-fair google-deepmind anthropic perplexity-ai langchain openai positional-encoding transformers counting copying language-modeling coding external-memory tool-use model-evaluation inference-speed model-benchmarking scaling research-synthesis jason-weston alexandr-wang karpathy arav-srinivas
Meta AI researcher Jason Weston introduced CoPE, a novel positional encoding method for transformers that uses context-dependent learnable gates to decide which tokens increment position, improving counting and copying tasks as well as language-modeling and coding performance. The approach could potentially be extended with external memory for gate calculation. Google DeepMind released Gemini 1.5 Flash and Pro, models optimized for fast inference. Anthropic announced general availability of tool use for Claude, improving its ability to orchestrate tools for complex tasks. Alexandr Wang launched the SEAL Leaderboards for private, expert evaluations of frontier models. Karpathy marked the 4th anniversary of GPT-3, emphasizing scaling and practical improvements. Perplexity AI launched Perplexity Pages, which turns research into visually appealing articles and which Aravind Srinivas described as an "AI Wikipedia".
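To make the gating mechanism concrete, below is a minimal PyTorch sketch of CoPE-style position logits as described in the paper: sigmoid gates over the query-key logits, a cumulative sum that yields fractional contextual positions, and interpolation between integer position embeddings. The class and variable names are illustrative, not Meta's released code.

```python
import torch
import torch.nn as nn

class CoPE(nn.Module):
    """CoPE-style contextual position bias (illustrative re-implementation)."""

    def __init__(self, npos_max: int, head_dim: int):
        super().__init__()
        self.npos_max = npos_max
        # One learnable embedding per integer position 0..npos_max-1.
        self.pos_emb = nn.Parameter(torch.zeros(npos_max, head_dim))

    def forward(self, query: torch.Tensor, attn_logits: torch.Tensor) -> torch.Tensor:
        # query: (batch, heads, seq, head_dim)
        # attn_logits: (batch, heads, seq, seq) pre-softmax q.k scores,
        # assumed causally masked (masked entries at -inf give zero gates).
        gates = torch.sigmoid(attn_logits)            # g_ij in (0, 1)
        # Contextual position of key j for query i: sum of gates over k = j..i.
        pos = gates.flip(-1).cumsum(dim=-1).flip(-1)
        pos = pos.clamp(max=self.npos_max - 1)
        # Positions are fractional, so interpolate between the two
        # nearest integer position embeddings.
        pos_floor, pos_ceil = pos.floor().long(), pos.ceil().long()
        logits_int = torch.matmul(query, self.pos_emb.t())  # (b, h, seq, npos_max)
        logits_floor = logits_int.gather(-1, pos_floor)
        logits_ceil = logits_int.gather(-1, pos_ceil)
        w = pos - pos_floor                                 # interpolation weight
        return attn_logits + (1 - w) * logits_floor + w * logits_ceil
```

Because the gates are learned from context, a token's "position" can count only contextually relevant items (sentence boundaries, say) rather than raw token offsets, which is what helps on the counting and copying tasks mentioned above.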