Company: "perplexity"
3x in 3 months: Cursor @ $28b, Cognition + Windsurf @ $10b
qwen3-coder chatgpt-agent claude-code mini cursor cognition windsurf alibaba openai anthropic perplexity agentic-ai fundraising software-engineering ai-coding agentic-economy model-integration community-feedback performance-benchmarking bindureddy xikun_zhang_ aravsrinivas gergelyorosz jeremyphoward
Cursor is reportedly fundraising at a $28 billion valuation with $1 billion ARR, while the combined Cognition+Windsurf entity is fundraising at a $10 billion valuation after acquiring Windsurf remainco for $300 million. The competition between AI coding agents intensifies as Cursor focuses on Async SWE Agents and Cognition+Windsurf acquires an agentic IDE. Alibaba's Qwen3-Coder gains widespread adoption for coding tasks and integration into tools like Claude Code and LM Studio. OpenAI rolls out ChatGPT Agent to all Plus, Pro, and Team users, sparking discussions about an "agentic economy" emphasizing AI literacy. Anthropic's Claude Code is praised as a premier development tool with active community feedback. Perplexity's Comet browser assistant receives positive reviews and new feature showcases. The debate continues on whether AI coding tools will replace developers, with critiques highlighting the ongoing human effort required. A new minimalistic software engineering agent, mini, achieves 65% on SWE-bench with just 100 lines of code.
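mini's actual 100-line implementation isn't reproduced here, but the pattern it popularized is simple: an LLM alternates between proposing a shell command and observing its output until it decides to stop. A minimal sketch of that loop, with `query_llm` as a hypothetical stand-in for the model call (not mini's real interface):

```python
import subprocess

def run_agent(task, query_llm, max_steps=30):
    """Minimal agent loop: the model sees the task plus all prior
    command/output pairs and emits one shell command per step, or the
    literal string SUBMIT when it believes the task is done."""
    history = [f"TASK: {task}"]
    for _ in range(max_steps):
        command = query_llm(history)  # placeholder for a real LLM call
        if command.strip() == "SUBMIT":
            break
        result = subprocess.run(command, shell=True, capture_output=True,
                                text=True, timeout=60)
        history.append(f"$ {command}\n{result.stdout}{result.stderr}")
    return history
```

The whole design fits in a page because the environment interface is just a shell; all the intelligence lives in the model.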
not much happened today
grok-4 smollm3 t5gemma claude-3.7-sonnet deepseek-r1 langchain openai google-deepmind perplexity xai microsoft huggingface anthropic agentic-ai model-controversy open-source model-release alignment fine-tuning long-context multimodality model-research aravsrinivas clementdelangue _akhaliq
LangChain is nearing unicorn status, while new flagship models from OpenAI and Google DeepMind (Gemini 3 Pro) are reportedly launching soon. Perplexity rolls out its agentic browser Comet to waitlists, offering multitasking and voice command features. xAI's Grok-4 update sparked controversy over offensive outputs, drawing comparisons to Microsoft's Tay bot and prompting regional blocks. Hugging Face released SmolLM3, a 3B-parameter open-source model with state-of-the-art reasoning and long-context capabilities. Google introduced T5Gemma encoder-decoder models, a significant update in this model category. Anthropic is investigating "alignment faking" in language models, focusing on safety concerns with models like Claude 3.7 Sonnet and DeepSeek-R1. "Grok 3 had high reasoning, Grok 4 has heil reasoning" was a notable user comment on the controversy.
small news items
gpt-4.5 gpt-5 deepseek-r1-distilled-qwen-1.5b o1-preview modernbert-0.3b qwen-0.5b o3 openai ollama mistral perplexity cerebras alibaba groq bytedance math benchmarking fine-tuning model-performance reinforcement-learning model-architecture partnerships funding jeremyphoward arankomatsuzaki sama nrehiew_ danhendrycks akhaliq
OpenAI announced plans for GPT-4.5 (Orion) and GPT-5, with GPT-5 integrating the o3 model and offering unlimited chat access in the free tier. DeepSeek R1 Distilled Qwen 1.5B outperforms OpenAI's o1-preview on math benchmarks, while ModernBERT 0.3b surpasses Qwen 0.5b on MMLU without fine-tuning. Mistral and Perplexity adopt Cerebras hardware for 10x performance gains. OpenAI's o3 model won a gold medal at the 2024 International Olympiad in Informatics. Partnerships include Qwen with Groq. Significant RLHF activity is noted in Nigeria and the Global South, and ByteDance is expected to rise in AI prominence soon. "GPT5 is all you need."
not much happened today
claudette llama-3-1 yi-lightning gpt-4o claude-3.5-sonnet answer-ai tencent notebooklm motherduck perplexity dropbox openai meta-ai-fair yi-ai zyphra-ai anthropic langchain openai synthetic-data fine-tuning sql audio-processing on-device-ai dataset-release transformer llm-reasoning ai-safety code-generation ai-pricing ai-job-market fchollet aravsrinivas svpino swyx
Answer.ai launched fastdata, a synthetic data generation library using claudette and drawing on Tencent's Billion Persona paper. NotebookLM became customizable, and Motherduck showcased notable LLM-in-SQL implementations. Perplexity and Dropbox announced competitors to Glean. OpenAI unveiled audio chat completions priced at 24 cents per minute. Meta AI released Llama 3.1, powering Lenovo AI Now's on-device agent. The Yi-Lightning model ranked #6 globally, surpassing GPT-4o. Zyphra AI released the large Zyda-2 dataset with 5 trillion tokens. François Chollet clarified that the transformer architecture is set-processing, not sequence-processing. Research suggests memorization aids LLM reasoning. Anthropic updated its Responsible Scaling Policy for AI safety. Tools such as Perplexity Finance, LangChain's Open Canvas, and the AlphaCodium code-generation tool were highlighted. Approximately $500 million was raised for AI agent startups, amid ongoing discussion of AI's job-market impact. Combining prompt caching with the Batches API can yield a 95% discount on Claude 3.5 Sonnet tokens, since the 50% batch discount stacks multiplicatively with cache reads billed at roughly 10% of the base input rate (0.5 × 0.1 = 5% of list price).
Mamba-2: State Space Duality
mamba-2 mamba transformer++ llama-3-70b gpt-3 hugging-face state-space-models perplexity training-efficiency data-pruning benchmarking multimodality video-analysis _albertgu tri_dao arankomatsuzaki _akhaliq clementdelangue karpathy
Mamba-2, a new state space model (SSM), outperforms previous models like Mamba and Transformer++ in perplexity and wall-clock time, featuring 8x larger states and 50% faster training. It introduces the concept of state space duality (SSD) connecting SSMs and linear attention. The FineWeb-Edu dataset, a high-quality subset of the 15 trillion token FineWeb dataset, filtered using llama-3-70b for educational quality, enables better and faster LLM learning, potentially reducing tokens needed to surpass GPT-3 performance. Additionally, perplexity-based data pruning using a 125M parameter model improves downstream performance and reduces pretraining steps by up to 1.45x. The Video-MME benchmark evaluates multi-modal LLMs on video analysis across multiple visual domains and video lengths.
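The pruning recipe scores every document's perplexity under a small (125M-parameter) reference model and keeps a chosen slice of the resulting distribution; which slice (low, medium, or high perplexity) works best varies by domain. A minimal sketch of the selection step, with `nll_fn` as a placeholder for the reference-model scoring (not the paper's exact settings):

```python
import math

def prune_by_perplexity(docs, nll_fn, keep_fraction=0.5, mode="low"):
    """Keep a fraction of documents selected by perplexity under a small
    reference model. nll_fn maps a document to its mean per-token negative
    log-likelihood under that model (a placeholder here)."""
    scored = sorted(docs, key=lambda d: math.exp(nll_fn(d)))  # perplexity = e^NLL
    k = int(len(scored) * keep_fraction)
    if mode == "low":        # lowest-perplexity documents
        return scored[:k]
    if mode == "high":       # highest-perplexity documents
        return scored[-k:]
    mid = len(scored) // 2   # "medium": a band around the median
    return scored[mid - k // 2 : mid - k // 2 + k]
```

Because the scoring model is tiny relative to the model being pretrained, the selection pass adds little cost compared to the pretraining steps it saves.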
Not much happened today
claude-3 claude-3-opus claude-3-sonnet gpt-4 gemma-2b anthropic perplexity langchain llamaindex cohere accenture mistral-ai snowflake together-ai hugging-face european-space-agency google gpt4all multimodality instruction-following out-of-distribution-reasoning robustness enterprise-ai cloud-infrastructure open-datasets model-deployment model-discoverability generative-ai image-generation
Anthropic released Claude 3, replacing Claude 2.1 as the default on Perplexity AI, with Claude 3 Opus surpassing GPT-4 in capability. Debate continues on whether Claude 3's performance stems from emergent properties or pattern matching. LangChain and LlamaIndex added support for Claude 3 enabling multimodal and tool-augmented applications. Despite progress, current models still face challenges in out-of-distribution reasoning and robustness. Cohere partnered with Accenture for enterprise AI search, while Mistral AI and Snowflake collaborate to provide LLMs on Snowflake's platform. Together AI Research integrates Deepspeed innovations to accelerate generative AI infrastructure. Hugging Face and the European Space Agency released a large earth observation dataset, and Google open sourced Gemma 2B, optimized for smartphones via the MLC-LLM project. GPT4All improved model discoverability for open models. The AI community balances excitement over new models with concerns about limitations and robustness, alongside growing enterprise adoption and open-source contributions. Memes and humor continue to provide social commentary.
1/4/2024: Jeff Bezos backs Perplexity's $520m Series B.
wizardcoder-33b-v1.1 mobilellama-1.4b-base shearedllama tinyllama mixtral-8x7b perplexity anthropic google nous-research mistral-ai hugging-face document-recall rnn-memory synthetic-data benchmarking multi-gpu-support context-length model-architecture sliding-window-attention model-parallelism gpu-optimization jeff-bezos
Perplexity announced their Series B funding round with notable investor Jeff Bezos, who previously invested in Google 25 years ago. Anthropic is raising $750 million, projecting at least $850 million in annualized revenue next year and implementing "brutal" changes to its Terms of Service. Discussions in the Nous Research AI Discord cover document recall limits when working with gigabytes of data, RNN memory/compute trade-offs, synthetic datasets, and benchmarking of models like WizardCoder-33B-V1.1, MobileLLaMA-1.4B-Base, ShearedLLaMA, and TinyLLaMA. Other highlights include Unsloth optimizations for multi-GPU systems, AI rap voice models, context-extending code, and architectural innovations such as applying Detectron/ViT backbones to LLMs, sliding window attention in Mistral, and parallelizing Mixtral 8x7b with FSDP and HF Accelerate.
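Sliding window attention, mentioned above in the Mistral context, limits each query token to the most recent `window` key positions instead of the full causal prefix, capping per-token attention cost at the window size. A small illustrative mask builder (the window size here is arbitrary, not Mistral's actual setting):

```python
def sliding_window_mask(seq_len, window):
    """Boolean causal attention mask: query position i may attend to key
    position j iff j <= i (causal) and i - j < window (i.e. j is among the
    last `window` positions, including i itself)."""
    return [[j <= i and i - j < window for j in range(seq_len)]
            for i in range(seq_len)]
```

Stacking layers with such a mask still propagates information beyond the window, since each layer widens the effective receptive field by `window` positions.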