Topic: "pricing"
OpenAI Dev Day: Apps SDK, AgentKit, Codex GA, GPT‑5 Pro and Sora 2 APIs
gpt-5-pro gpt-realtime-mini-2025-10-06 gpt-audio-mini-2025-10-06 gpt-image-1-mini sora-2 sora-2-pro openai canva figma zillow coursera api model-release fine-tuning agentic-ai code-generation model-deployment pricing prompt-optimization software-development multimodality sama edwinarbus gdb dbreunig stevenheidel
OpenAI showcased major product launches at DevDay, including the Apps SDK, AgentKit, and Codex, now generally available with an SDK and enterprise features. They introduced new models such as gpt-5-pro, gpt-realtime-mini-2025-10-06, gpt-audio-mini-2025-10-06, gpt-image-1-mini, and sora-2 with a pro variant. The Apps SDK enables embedding interactive apps inside ChatGPT with partners like Canva, Figma, Zillow, and Coursera. AgentKit offers a full stack for building and deploying production agents with tools like ChatKit and Guardrails. Codex supports speech and controller-driven coding and is credited with high internal shipping velocity. GPT-5 Pro pricing was revealed at $15 per million input tokens and $120 per million output tokens. Highlights included the observations that "OpenAI turned ChatGPT into an application platform" and that "AgentKit built a working agent in under 8 minutes."
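As a rough illustration of what those per-token rates mean in practice, here is a minimal sketch that computes the cost of a single request at the quoted GPT-5 Pro prices; the token counts used in the example are illustrative assumptions, not figures from the announcement.

```python
# Rough per-request cost at the GPT-5 Pro rates quoted above:
# $15 per 1M input tokens, $120 per 1M output tokens.
# Token counts below are illustrative assumptions.

INPUT_PRICE_PER_M = 15.00    # USD per 1M input tokens
OUTPUT_PRICE_PER_M = 120.00  # USD per 1M output tokens

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the USD cost of one request at the quoted rates."""
    return (input_tokens * INPUT_PRICE_PER_M +
            output_tokens * OUTPUT_PRICE_PER_M) / 1_000_000

# Example: a 10k-token prompt that produces a 2k-token answer
print(f"${request_cost(10_000, 2_000):.2f}")  # $0.39
```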
GDPval finding: Claude Opus 4.1 within 95% of AGI (human experts in top 44 white-collar jobs)
claude-4.1-opus gpt-5-high gptnext gemini-2.5-flash gemini-2.5-flash-lite deepseek-v3.1-terminus google-chirp-2 qwen-2.5b openai anthropic google nvidia artificial-analysis deepseek benchmarking agentic-ai tool-use long-context speech-to-text model-evaluation reasoning pricing model-performance kevinweil gdb dejavucoder yuchenj_uw lhsummers
OpenAI's Evals team released GDPval, a comprehensive evaluation benchmark covering 1,320 tasks across 44 predominantly digital occupations, assessing AI models against human experts with an average of 14 years of experience. Early results show Claude 4.1 Opus outperforming human experts in most categories and GPT-5 (high) trailing behind, with projections that GPTnext could match human performance by mid-2026. The benchmark is positioned as a key metric for policymakers and labor impact forecasting. Additionally, Artificial Analysis reported improvements in Gemini 2.5 Flash/Flash-Lite and DeepSeek V3.1 Terminus, alongside new speech-to-text benchmarks (AA-WER) highlighting leaders like Google Chirp 2 and NVIDIA Canary Qwen2.5B. Agentic AI advances include Kimi OK Computer, an OS-like agent with extended tool capabilities and new vendor verification tools.
Execuhires Round 2: Scale-Meta, Lamini-AMD, and Instacart-OpenAI
o3-pro o3 o1-pro gpt-4o gpt-4.1 gpt-4.1-mini gpt-4.1-nano meta-ai-fair scale-ai lamini amd openai gemini google anthropic model-release benchmarking reasoning fine-tuning pricing model-performance direct-preference-optimization complex-problem-solving alexandr_wang sharon_zhou fidji_simo sama jack_rae markchen90 kevinweil gdb gregkamradt lechmazur wesrothmoney paul_cal imjaredz cto_junior johnowhitaker polynoamial scaling01
Meta hires Scale AI's Alexandr Wang to lead its new "Superintelligence" division following a $15 billion investment for a 49% stake in Scale. Lamini's Sharon Zhou joins AMD as VP of AI under Lisa Su, while Instacart's Fidji Simo becomes CEO of Apps at OpenAI under Sam Altman. Meta offers compensation packages of over $10 million/year to top researchers, successfully recruiting Jack Rae from Gemini. OpenAI releases the o3-pro model to ChatGPT Pro users and the API; it outperforms o3 and sets new records on benchmarks such as Extended NYT Connections and SnakeBench. Despite being slower than o1-pro, o3-pro excels at reasoning and complex problem-solving. OpenAI cuts o3 pricing by 80%, making it cheaper than GPT-4o and pressuring competitors like Google and Anthropic to lower prices. Users can now fine-tune the GPT-4.1 family using direct preference optimization (DPO) for subjective tasks.
not much happened today
gpt-4.5 gpt-4 gpt-4o o1 claude-3.5-sonnet claude-3.7 claude-3-opus deepseek-v3 grok-3 openai anthropic perplexity-ai deepseek scaling01 model-performance humor emotional-intelligence model-comparison pricing context-windows model-size user-experience andrej-karpathy jeremyphoward abacaj stevenheidel yuchenj_uw aravsrinivas dylan522p random_walker
GPT-4.5 sparked mixed reactions on Twitter, with @karpathy noting that users preferred GPT-4 in a poll despite his personal preference for GPT-4.5's creativity and humor. Critics like @abacaj highlighted GPT-4.5's slowness and questioned its practical value and pricing compared to other models. On performance, GPT-4.5 ranks above GPT-4o but below o1 and Claude 3.5 Sonnet; Claude 3.7 outperforms it on many tasks, though GPT-4.5 is praised for its humor and "vibes." Speculation about GPT-4.5's size suggests around 5 trillion parameters. Discussions also touched on pricing disparities, with Perplexity Deep Research at $20/month versus ChatGPT Pro at $200/month. The emotional intelligence and humor of models like Claude 3.7 were also noted.
Not much happened today
grok-beta llama-3-1-70b claude-3-5-haiku claude-3-opus llama-3 chatgpt gemini meta-ai-fair scale-ai anthropic perplexity-ai langchainai weights-biases qwen pricing national-security defense open-source agentic-ai retrieval-augmented-generation election-predictions real-time-updates annotation ai-ecosystem memes humor alexandr_wang svpino aravsrinivas bindureddy teortaxestex jessechenglyu junyang-lin cto_junior jerryjliu0
Grok Beta surpasses Llama 3.1 70B in intelligence but is less competitive due to its pricing of $5 per 1M input tokens and $15 per 1M output tokens. Defense Llama, developed with Meta AI and Scale AI, targets American national security applications. SWE-Kit, an open-source framework, supports building customizable AI software engineers compatible with Llama 3, ChatGPT, and Claude. LangChainAI and Weights & Biases integrate to improve retrievers and reduce hallucinations in RAG applications using Gemini. Perplexity AI offers enhanced election-tracking tools for the 2024 elections, including live state results and support for Claude 3.5 Haiku. AI Talk launched, featuring discussions on Chinese AI labs with guests from Qwen. Memes highlight Elon Musk and humorous AI coding mishaps.
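To make price comparisons like the Grok Beta vs. Llama 3.1 70B one concrete, one common approach is to blend input and output rates into a single per-million-token figure at an assumed usage mix. The sketch below does this for the Grok Beta rates quoted above; the 3:1 input-to-output ratio is an illustrative assumption, not a figure from the source.

```python
# Blend input/output rates into one per-1M-token figure at an assumed usage
# mix, to simplify per-model price comparisons. The Grok Beta rates
# ($5/1M input, $15/1M output) come from the summary above; the 3:1
# input-to-output ratio (input_share=0.75) is an illustrative assumption.

def blended_price_per_m(input_price: float, output_price: float,
                        input_share: float = 0.75) -> float:
    """USD per 1M tokens, weighting input vs. output by expected usage share."""
    return input_share * input_price + (1 - input_share) * output_price

print(f"Grok Beta blended: ${blended_price_per_m(5.0, 15.0):.2f} per 1M tokens")
# -> Grok Beta blended: $7.50 per 1M tokens
```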