Person: "danhendrycks"
gpt-image-1 - ChatGPT's imagegen model, confusingly NOT 4o, now available in API
gpt-image-1 o3 o4-mini gpt-4.1 eagle-2.5-8b gpt-4o qwen2.5-vl-72b openai nvidia hugging-face x-ai image-generation content-moderation benchmarking long-context multimodality model-performance supercomputing virology video-understanding model-releases kevinweil lmarena_ai _philschmid willdepue arankomatsuzaki epochairesearch danhendrycks reach_vb mervenoyann _akhaliq
OpenAI officially launched the gpt-image-1 API for image generation and editing, supporting features like alpha-channel transparency and a "low" content-moderation setting. OpenAI's o3 and o4-mini lead benchmarks for style control, math, coding, and hard prompts, with o3 ranking #1 in several categories. A new benchmark called Vending-Bench reveals large performance variance in LLMs on extended tasks. GPT-4.1 ranks in the top 5 for hard prompts and math. Nvidia's Eagle 2.5-8B matches GPT-4o and Qwen2.5-VL-72B on long-video understanding. AI supercomputer performance is doubling every 9 months, with xAI's Colossus estimated to cost $7 billion and the US accounting for 75% of global performance. The Virology Capabilities Test shows OpenAI's o3 outperforming 94% of expert virologists. Nvidia also released the Describe Anything Model (DAM), a multimodal LLM for detailed image and video captioning, now available on Hugging Face.
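As a minimal sketch of the new API surface (using the OpenAI Python SDK; the `background` and `moderation` parameters follow the launch description and should be verified against the current API reference):

```python
# Minimal sketch: generating a transparent-background image with gpt-image-1
# via the OpenAI Images API. Parameter names (background, moderation) follow
# the launch notes; check the current docs before relying on them.
import base64
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

result = client.images.generate(
    model="gpt-image-1",
    prompt="a flat vector icon of a vending machine",
    size="1024x1024",
    background="transparent",   # alpha-channel transparency
    moderation="low",           # the relaxed content-moderation setting
)

# gpt-image-1 returns base64-encoded image data
with open("icon.png", "wb") as f:
    f.write(base64.b64decode(result.data[0].b64_json))
```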
GPT 4.1: The New OpenAI Workhorse
gpt-4.1 gpt-4.1-mini gpt-4.1-nano gpt-4o gemini-2.5-pro openai llama-index perplexity-ai google-deepmind coding instruction-following long-context benchmarks model-pricing model-integration model-deprecation sama kevinweil omarsar0 aidan_mclau danhendrycks polynoamial scaling01 aravsrinivas lmarena_ai
OpenAI released GPT-4.1, including GPT-4.1 mini and GPT-4.1 nano, highlighting improvements in coding, instruction following, and long-context handling up to 1 million tokens. The model scores 54 on SWE-bench Verified and shows a 60% improvement over GPT-4o on OpenAI's internal benchmarks. Pricing for GPT-4.1 nano is notably low at $0.10/1M input tokens and $0.40/1M output tokens. GPT-4.5 Preview is being deprecated in favor of GPT-4.1. LlamaIndex shipped day-0 integration support. Some negative feedback was noted for GPT-4.1 nano. Additionally, Perplexity's Sonar API ties with Gemini-2.5 Pro for the top spot on the LM Search Arena leaderboard. New benchmarks like MRCR and GraphWalks were introduced alongside updated prompting guides and cookbooks.
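For scale, a quick back-of-envelope cost calculation at the quoted nano prices (a minimal sketch; the per-token arithmetic is generic, not an official pricing calculator):

```python
# Cost estimate at the quoted GPT-4.1 nano prices:
# $0.10 per 1M input tokens, $0.40 per 1M output tokens.
INPUT_PRICE_PER_M = 0.10
OUTPUT_PRICE_PER_M = 0.40

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    return (input_tokens / 1e6) * INPUT_PRICE_PER_M + (output_tokens / 1e6) * OUTPUT_PRICE_PER_M

# e.g. one long-context call: 1M input tokens, 2k output tokens
print(f"${estimate_cost(1_000_000, 2_000):.4f}")  # ~$0.1008
```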
small news items
gpt-4.5 gpt-5 deepseek-r1-distilled-qwen-1.5b o1-preview modernbert-0.3b qwen-0.5b o3 openai ollama mistral perplexity cerebras alibaba groq bytedance math benchmarking fine-tuning model-performance reinforcement-learning model-architecture partnerships funding jeremyphoward arankomatsuzaki sama nrehiew_ danhendrycks akhaliq
OpenAI announced plans for GPT-4.5 (Orion) and GPT-5, with GPT-5 integrating the o3 model and offering unlimited chat access in the free tier. DeepSeek R1 Distilled Qwen 1.5B outperforms OpenAI's o1-preview on math benchmarks, while ModernBERT 0.3B surpasses Qwen 0.5B on MMLU without fine-tuning. Mistral and Perplexity adopt Cerebras hardware for 10x inference performance gains. OpenAI's o3 model achieved a gold medal at the 2024 International Olympiad in Informatics. Partnerships include Qwen with Groq. Significant RLHF activity is noted in Nigeria and the Global South, and ByteDance is expected to rise in AI prominence soon. "GPT5 is all you need."
not much happened today
zonos-v0.1 audiobox-aesthetics moshi sonar llama-3-70b gpt-4o-mini claude-3.5-haiku gpt-4o claude-3.5-sonnet deepseek-r1-distilled-qwen-1.5b reasonflux-32b o1-preview zyphra-ai meta-ai-fair kyutai-labs perplexity-ai cerebras uc-berkeley brilliant-labs google-deepmind text-to-speech speech-to-speech benchmarking model-performance reinforcement-learning math real-time-processing open-source cross-platform-integration multilinguality zero-shot-learning danhendrycks
Zyphra AI launched Zonos-v0.1, a leading open-weight text-to-speech model supporting multiple languages and zero-shot voice cloning. Meta FAIR released the open-source Audiobox Aesthetics model trained on 562 hours of audio data. Kyutai Labs introduced Moshi, a real-time speech-to-speech system with low latency. Perplexity AI announced the Sonar model, built on Llama 3.3 70B and powered by Cerebras infrastructure, which outperforms top models like GPT-4o and Claude 3.5 Sonnet while running at 1,200 tokens/second. UC Berkeley open-sourced a 1.5B model trained with reinforcement learning that beats o1-preview on math tasks. ReasonFlux-32B achieved 91.2% on the MATH benchmark, also outperforming OpenAI o1-preview. CrossPoster, an AI agent for cross-platform posting built on LlamaIndex workflows, was released. Brilliant Labs integrated the Google DeepMind Gemini Live API into smart glasses for real-time translation and object identification.
OpenAI takes on Gemini's Deep Research
o3 o3-mini-high o3-deep-research-mini openai google-deepmind nyu uc-berkeley hku reinforcement-learning benchmarking inference-speed model-performance reasoning test-time-scaling agent-design sama danhendrycks ethan-mollick dan-shipper
OpenAI released Deep Research, an agent powered by the full o3 model, showing significant improvements on the HLE benchmark and SOTA results on GAIA. The release includes an "inference time scaling" chart demonstrating rigorous research, though some criticism arose over reporting results on public test sets. The agent is described as "extremely simple" and is currently limited to 100 queries/month, with plans for a higher-rate version. Reception has been mostly positive, with some skepticism. Separately, advances in reinforcement learning were highlighted, including a simple test-time scaling technique called budget forcing that improved reasoning on math competitions by up to 27%. Researchers from Google DeepMind, NYU, UC Berkeley, and HKU contributed to these findings. The original Gemini Deep Research team will participate in the upcoming AI Engineer NYC event.
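Budget forcing itself is simple to sketch: cap the length of the model's "thinking" phase and, when the model tries to stop early, append a continuation token such as "Wait" to force more reasoning. The sketch below is illustrative only; the `generate` helper, stop string, and word-count budget are hypothetical stand-ins, not the paper's actual code.

```python
# Illustrative sketch of budget forcing (test-time scaling by controlling
# thinking length). `generate(prompt, stop=None, max_tokens=...) -> str` is a
# hypothetical completion helper standing in for any completions API; word
# counts are a crude proxy for token counts.
def budget_forced_answer(generate, prompt, min_think_words=500, max_rounds=4):
    thinking = ""
    for _ in range(max_rounds):
        thinking += generate(prompt + thinking, stop="</think>", max_tokens=2048)
        if len(thinking.split()) >= min_think_words:
            break
        # Model tried to stop early: append "Wait" to suppress the stop and
        # force additional reasoning in the next round.
        thinking += "\nWait,"
    # Force the end of the thinking phase and elicit the final answer.
    return generate(prompt + thinking + "\n</think>\nFinal answer:", max_tokens=256)
```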
DeepSeek #1 on US App Store, Nvidia stock tanks -17%
deepseek-r1 deepseek-v3 qwen2.5-vl o1 deepseek openai nvidia langchain moe-architecture chain-of-thought fp8-precision multimodality vision agentic-ai inference-scaling gpu-optimization model-efficiency ai-chatbots memory-integration tool-use stock-market-reactions sama mervenoyann omarasar0 teortaxestex nptacek carpeetti finbarrtimbers cwolferesearch arthurrapier danhendrycks scaling01 janusflow
DeepSeek has made a significant cultural impact by hitting mainstream news unexpectedly in 2025. The DeepSeek-R1 model features a massive 671B-parameter MoE architecture and demonstrates chain-of-thought (CoT) capabilities comparable to OpenAI's o1 at a lower cost. DeepSeek V3 trains with fp8 precision and is reported to train roughly 42% faster than its 236B-parameter predecessor, DeepSeek V2. The Qwen2.5-VL multimodal models support images and videos at sizes from 3B to 72B parameters, with strong vision and agentic capabilities. LangChain and LangGraph integration enables AI chatbots with memory and tool use, including applications like the DeFi Agent. Discussions highlight NVIDIA's role in hardware acceleration, with concerns about stock drops due to DeepSeek's efficiency and market fears. Compute demand is expected to rise despite efficiency gains, driven by inference scaling and MoE design improvements.
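The MoE design at the heart of these models routes each token to only a few experts, so only a small fraction of the 671B parameters does work on any given forward pass. A minimal top-k routing sketch in PyTorch (illustrative only; DeepSeek's actual router adds shared experts and load-balancing tricks this omits):

```python
import torch
import torch.nn.functional as F

def moe_forward(x, gate, experts, k=2):
    """Route each token to its top-k experts and mix their outputs.

    x: (tokens, d_model); gate: nn.Linear(d_model, n_experts);
    experts: list of feed-forward modules. Illustrative sketch only.
    """
    logits = gate(x)                                     # (tokens, n_experts)
    weights, idx = torch.topk(F.softmax(logits, dim=-1), k, dim=-1)
    weights = weights / weights.sum(-1, keepdim=True)    # renormalize over chosen experts
    out = torch.zeros_like(x)
    for slot in range(k):
        for e, expert in enumerate(experts):
            mask = idx[:, slot] == e                     # tokens routed to expert e in this slot
            if mask.any():
                out[mask] += weights[mask, slot:slot + 1] * expert(x[mask])
    return out
```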
not much happened today
llama-3.1-nemotron-70b golden-gate-claude embed-3 liquid-ai anthropic cohere openai meta-ai-fair nvidia perplexity-ai langchain kestra ostrisai llamaindex feature-steering social-bias multimodality model-optimization workflow-orchestration inference-speed event-driven-workflows knowledge-backed-agents economic-impact ai-national-security trust-dynamics sam-altman lmarena_ai aravsrinivas svpino richardmcngo ajeya_cotra tamaybes danhendrycks jerryjliu0
Liquid AI held a launch event introducing new foundation models. Anthropic shared follow-up research on social bias and feature steering, building on its "Golden Gate Claude" work. Cohere released multimodal Embed 3 embedding models following Aya Expanse. Sam Altman debunked misinformation about GPT-5/Orion. Meta AI FAIR announced Open Materials 2024 with new models and datasets for inorganic materials discovery using the EquiformerV2 architecture. Anthropic demonstrated feature steering as a way to trade off social bias against model capabilities. NVIDIA's Llama-3.1-Nemotron-70B ranked highly on the Arena leaderboard with style control. Perplexity AI expanded to 100M weekly queries with new finance and reasoning modes. LangChain emphasized real application integration with interactive frame interpolation. Kestra highlighted scalable event-driven workflows with open-source YAML-based orchestration. OpenFLUX doubled inference speed through guidance LoRA training. Discussions on AI safety covered trust dynamics between humans and AI, the economic impact of AI automation, and the White House AI National Security memo addressing cyber and biological risks. LlamaIndex showcased knowledge-backed agents for enhanced AI applications.
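Feature steering in this sense means nudging internal activations along an interpretable direction and observing the behavioral trade-off. Anthropic's version clamps sparse-autoencoder features inside Claude; the sketch below is only a generic activation-steering analogue on an open transformer, and the model/layer names in the usage comment are hypothetical.

```python
import torch

def steering_hook(steer_vec: torch.Tensor, scale: float = 4.0):
    """Forward hook that adds a scaled feature direction to a layer's output
    (the residual stream). Generic activation steering, illustrative only --
    not Anthropic's SAE-based feature clamping."""
    def hook(module, inputs, output):
        hidden = output[0] if isinstance(output, tuple) else output
        hidden = hidden + scale * steer_vec.to(hidden.device, hidden.dtype)
        return (hidden,) + output[1:] if isinstance(output, tuple) else hidden
    return hook

# Usage sketch (hypothetical model/layer path):
# handle = model.transformer.h[20].register_forward_hook(steering_hook(vec))
# ... run generation, compare outputs ...
# handle.remove()
```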
not much happened today + AINews Podcast?
superforecaster-ai llama-3 reflection-70b glean sambanova cerebras stanford google apple hugging-face lmsys prompt-engineering research-ideas inference-speed retrieval-augmented-generation evaluation-methods visual-intelligence on-device-ai model-performance benchmarking novelty-detection danhendrycks benjamin-clavie bclavie bindureddy swyx borismpower corbtt drjimfan clementdelangue rohanpaul_ai
Glean doubled its valuation again. Dan Hendrycks' Superforecaster AI generates plausible election forecasts with interesting prompt engineering. A Stanford study found that LLM-generated research ideas are rated statistically more novel than those of expert humans. SambaNova announced faster inference for Llama 3 models, surpassing Cerebras. Benjamin Clavie gave a notable talk on retrieval-augmented generation techniques. Strawberry is reported to launch in two weeks. Google Illuminate offers AI-generated podcast discussions about papers and books. Apple unveiled new AI features in iOS 18, including visual intelligence, an improved Siri, and camera-based event creation that mixes on-device and cloud processing. The Reflection 70B model sparked controversy over its performance claims. Experts highlighted the unreliability of traditional benchmarks like MMLU and HumanEval, recommending alternatives such as the LMSys Chatbot Arena and Hugging Face's open-sourced Lighteval suite. The AI research community continues to explore AI's role in generating novel research ideas and improving benchmarking.
super quiet day
jamba-1.5 phi-3.5 dracarys llama-3-1-70b llama-3-1 ai21-labs anthropic stanford hugging-face langchain qdrant aws elastic state-space-models long-context benchmarking ai-safety virtual-environments multi-agent-systems resource-management community-engagement model-performance bindu-reddy rohanpaul_ai jackclarksf danhendrycks reach_vb iqdotgraph
AI21 Labs released Jamba 1.5, a scaled-up hybrid SSM-Transformer model optimized for long context windows, with 94B active parameters and up to 2.5x faster inference, outperforming models like Llama 3.1 70B on benchmarks. The Phi-3.5 model was praised for its safety and performance, while Dracarys, a new 70B open-source coding model announced by Bindu Reddy, claims better benchmark scores than Llama 3.1 70B. Discussions of California's SB 1047 AI-safety legislation involve Stanford and Anthropic, weighing precaution against industry growth. Innovations include uv virtual environments for rapid setup, LangChain's LangSmith resource tags for project management, and multi-agent systems in Qdrant enhancing data workflows. Community events like the RAG workshop by AWS, LangChain, and Elastic continue to support AI learning and collaboration. Memes remain a popular way to engage with AI industry culture.
Problems with MMLU-Pro
mmlu-pro llama-3-8b-q8 gpt4all-3.0 chatgpt claude llama gemini mobilellm runway-gen-3-alpha meta-3d-gen huggingface meta-ai-fair salesforce runway nomic-ai pineapple argil-ai benchmarking prompt-engineering model-evaluation model-performance multimodality automated-dataset-generation video-generation open-source-models ai-assistants text-to-3d deepfake transformers reasoning wenhu-chen danhendrycks clementine ylecun adcock_brett svpino rohanpaul_ai
MMLU-Pro is gaining attention as the successor to MMLU on HuggingFace's Open LLM Leaderboard V2, despite community concerns about evaluation discrepancies and prompt sensitivity affecting model performance; notably, Llama-3-8b-q8 improved by 10 points with simple prompt tweaks. Meta's MobileLLM research explores running sub-billion-parameter LLMs on smartphones using shared weights and deeper architectures. Salesforce's APIGen introduces an automated dataset-generation system for function-calling tasks that outperforms larger models. Runway Gen-3 Alpha launches an AI video generator for paid users that creates realistic 10-second clips. Nomic AI's GPT4All 3.0 offers an open-source desktop app supporting thousands of local models. AI assistants with multimodal capabilities and affordable access to multiple LLMs like ChatGPT, Claude, Llama, and Gemini are emerging. Meta 3D Gen advances text-to-3D asset generation, while Argil AI enables deepfake video creation from text threads. Research on transformer grokking and reasoning highlights advances in robust reasoning capabilities.
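Prompt sensitivity of this kind is easy to reproduce in a small harness: the measured score depends on the exact template and on how leniently the answer letter is extracted. A minimal illustrative sketch, where `ask_model` and the dataset format are hypothetical stand-ins for a real evaluation setup:

```python
import re

# Two prompt templates for the same multiple-choice items; small wording
# changes like these are the kind of "prompt tweaks" that shift scores.
TEMPLATES = [
    "Question: {q}\nOptions:\n{opts}\nAnswer with a single letter.",
    "{q}\n{opts}\nThink step by step, then finish with 'The answer is (X)'.",
]

def score(ask_model, dataset, template):
    """Score a model on multiple-choice items under one prompt template.
    `ask_model(prompt) -> str` and `dataset` (dicts with q/opts/gold letter)
    are hypothetical stand-ins for a real harness."""
    correct = 0
    for item in dataset:
        reply = ask_model(template.format(q=item["q"], opts=item["opts"]))
        letters = re.findall(r"\b([A-J])\b", reply)   # lenient answer extraction
        correct += int(bool(letters) and letters[-1] == item["gold"])
    return correct / len(dataset)

# Comparing score(...) under TEMPLATES[0] vs TEMPLATES[1] can move accuracy
# by several points without any change to the underlying model.
```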