All tags
Topic: "model-expansion"
Anthropic's Claude Opus 4.7
claude-opus-4.7 codex gpt-rosalind anthropic openai cursor replit perplexity-ai microsoft coding agentic-ai tokenization long-context benchmarking image-processing software-engineering computer-use plugin-integration multi-terminal-support ssh-access model-expansion bcherny kimmonismus scaling01 valsai artificialanlys natolambert nrehiew_
Anthropic launched Claude Opus 4.7, its most capable Opus model yet, featuring stronger coding and agentic performance and a new xhigh reasoning tier. Benchmarks show substantial gains, including SWE-bench Pro 64.3%, SWE-bench Verified 87.6%, and TerminalBench 69.4%, with top rankings on the Vals Index and GDPval-AA. Technical changes include a new tokenizer, improved long-context handling, and an increase in image input resolution to 3.75MP. Some long-context benchmarks showed mixed results, with the evaluation focus shifting from MRCR to Graphwalks. Adoption was rapid across tools like Cursor, VS Code, Replit Agent, and Perplexity. Meanwhile, OpenAI expanded Codex into a broader computer agent with Mac computer use, an in-app browser, image generation/editing, 90+ plugins, multi-terminal support, SSH remote devbox access, and richer file previews. A new vertical life-sciences model, GPT-Rosalind, was also introduced.
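The 3.75MP image cap above is a pixel budget rather than a fixed width or height. A minimal sketch of pre-scaling an image's dimensions to stay under such a cap (the function name and the exact rounding behavior are assumptions for illustration, not Anthropic's API):

```python
import math

MAX_PIXELS = 3_750_000  # 3.75 MP cap mentioned in the summary


def fit_under_cap(width: int, height: int, max_pixels: int = MAX_PIXELS) -> tuple[int, int]:
    """Return dimensions scaled down (preserving aspect ratio) so that
    width * height <= max_pixels; unchanged if already under the cap."""
    pixels = width * height
    if pixels <= max_pixels:
        return width, height
    scale = math.sqrt(max_pixels / pixels)  # uniform scale keeps aspect ratio
    return max(1, int(width * scale)), max(1, int(height * scale))


# a 12 MP image (4000x3000) gets scaled down below the cap
w, h = fit_under_cap(4000, 3000)
assert w * h <= MAX_PIXELS
```

Scaling both axes by `sqrt(max_pixels / pixels)` shrinks the pixel count by exactly that ratio squared, so the result lands just under the budget while keeping the aspect ratio intact.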
1/6-7/2024: LLaMA Pro - an alternative to PEFT/RAG??
llama-3 llama-3-1-1b llama-3-8-3b gpt-4 gpt-3.5 dall-e openai mistral-ai llamaindex langchain fine-tuning model-expansion token-limits privacy multilinguality image-generation security custom-models model-training yannic-kilcher
New research papers introduce promising LLaMA extensions: TinyLlama, a compact 1.1B-parameter model pretrained on about 1 trillion tokens over 3 epochs, and LLaMA Pro, an 8.3B-parameter model that expands LLaMA2-7B with additional training on 80 billion tokens of code and math data. LLaMA Pro adds new transformer blocks, rather than modifying existing weights, to avoid catastrophic forgetting and balances language and code tasks, but faces scrutiny for not building on newer models like Mistral or Qwen. Meanwhile, OpenAI Discord discussions cover GPT-4 token limits, privacy reassurances, fine-tuning for GPT-3.5, challenges with multi-language image recognition, custom GPT creation requiring ChatGPT Plus, and security concerns in GPT deployment. Users also share tips on dynamic image generation with DALL-E and logo creation.
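LLaMA Pro's layer-addition idea can be illustrated with a toy sketch: new residual blocks are inserted with zero-initialized output projections, so at initialization the expanded stack computes exactly the same function as the original, and only the new blocks then need training. The code below is an illustrative assumption using tiny residual MLP blocks, not the paper's exact transformer architecture:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8  # toy hidden dimension


def block(x, W1, W2):
    # residual MLP block: x + W2 @ relu(W1 @ x)
    return x + W2 @ np.maximum(W1 @ x, 0.0)


def forward(x, blocks):
    for W1, W2 in blocks:
        x = block(x, W1, W2)
    return x


# original 2-block "model"
blocks = [(rng.normal(size=(d, d)), rng.normal(size=(d, d))) for _ in range(2)]
x = rng.normal(size=d)
y_before = forward(x, blocks)

# block expansion: after each original block, insert a copy whose output
# projection is zero-initialized, so the new block is an identity map
expanded = []
for W1, W2 in blocks:
    expanded.append((W1, W2))
    expanded.append((W1.copy(), np.zeros_like(W2)))  # zero output -> x + 0 = x

y_after = forward(x, expanded)
assert np.allclose(y_before, y_after)  # original function preserved at init
```

Because the zero-initialized blocks start as identities, training only them on new data (code and math, in LLaMA Pro's case) can add capability without overwriting what the frozen original layers already encode.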