All tags
Model: "gemini-1.5-pro"
not much happened today
gemini-1.5-pro claude-3 chatgpt langchain meta-ai-fair hugging-face openrouter google-ai microsoft openai anthropic agent-ops observability multi-turn-evaluation reinforcement-learning distributed-training api model-stability user-intent-clustering software-development project-management code-generation hwchase17 ankush_gola11 whinthorn koylanai _lewtun bhutanisanyam1 thom_wolf danielhanchen cline canvrno pashmerepat mustafasuleyman yusuf_i_mehdi jordirib1 fidjissimo bradlightcap mikeyk alexalbert__
LangSmith launched the Insights Agent with multi-turn evaluation for agent ops and observability, improving failure detection and user intent clustering. Meta PyTorch and Hugging Face introduced OpenEnv, a Gymnasium-style API and hub for reproducible agentic environments supporting distributed training. Discussions highlighted the importance of provider fidelity in agent coding, with OpenRouter's exacto filter improving stability. Builder UX updates include Google AI Studio's Annotation mode for Gemini code changes, Microsoft's Copilot Mode enhancements in Edge, and OpenAI's Shared Projects and Company Knowledge features for ChatGPT Business. Claude added project-scoped Memory. In reinforcement learning, Meta's ScaleRL proposes a methodology to predict RL scaling outcomes for LLMs with improved efficiency and stability.
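OpenEnv's hub standardizes agentic environments around the familiar Gymnasium reset/step loop; a minimal sketch of that contract (the `EchoEnv` class and its reward scheme are invented for illustration, not part of OpenEnv's actual API):

```python
# Illustrative Gymnasium-style environment: reset() yields an initial
# observation, step() returns (obs, reward, terminated, truncated, info).
# EchoEnv itself is a toy invented for this sketch.

class EchoEnv:
    """Toy environment: the agent is rewarded for echoing the prompt."""

    def reset(self, seed=None):
        self.prompt = "hello"
        return self.prompt, {}  # (observation, info)

    def step(self, action):
        reward = 1.0 if action == self.prompt else 0.0
        terminated, truncated = True, False  # one-shot episode
        return self.prompt, reward, terminated, truncated, {}

# The rollout loop an RL trainer would run against any such environment.
env = EchoEnv()
obs, info = env.reset()
obs, reward, terminated, truncated, info = env.step(obs)
print(reward)  # 1.0
```

Because every environment exposes the same five-tuple, distributed trainers can treat a coding sandbox and a browser environment interchangeably.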
OAI and GDM announce IMO Gold-level results with natural language reasoning, no specialized training or tools, under human time limits
gemini-1.5-pro o1 openai google-deepmind reinforcement-learning reasoning model-scaling fine-tuning model-training benchmarking natural-language-processing terence_tao oriol_vinyals alexander_wei jerry_tworek paul_christiano eliezer_yudkowsky
OpenAI and Google DeepMind achieved a major milestone by solving 5 of 6 problems at the International Mathematical Olympiad (IMO) 2025 within the human time limit of 4.5 hours, reaching the IMO Gold-medal threshold. The breakthrough was accomplished using general-purpose reinforcement learning and pure in-weights reasoning, without specialized tools or internet access, surpassing previous systems like AlphaProof and AlphaGeometry2. The result settled a 3-year-old bet on AI's capability to solve IMO problems and sparked discussion among mathematicians including Terence Tao. Even so, 26 human competitors outscored the AI systems by solving the hardest combinatorics problem (P6). The achievement highlights advances in reinforcement learning, reasoning, and model scaling in AI research.
OpenAI adopts MCP
gemini-2.5-pro gemini-1.5-pro gemini-2.0-flash qwen-2.5-omni-7b deepseek-v3-0324 deepseek-r1 openai google-deepmind alibaba togethercompute model-benchmarking multimodality reasoning scaling-laws model-quantization synthetic-data model-performance context-windows speech-recognition translation audio-processing video-processing swyx
OpenAI announced support for MCP (Model Context Protocol), Anthropic's open standard for connecting models to external tools and data sources. Google's Gemini 2.5 Pro leads benchmarks with top scores in MMLU-Pro (86%), GPQA Diamond (83%), and AIME 2024 (88%), featuring a 1 million token context window and multimodal inputs. Alibaba's Qwen 2.5 Omni 7B was released as a fully multimodal, interactive, open-source model with a novel "thinker-talker" architecture supporting voice and video chat. DeepSeek V3-0324 outperforms its predecessor on multiple benchmarks. Research on reasoning features in large language models using sparse autoencoders was highlighted, alongside a study on scaling laws of synthetic data showing performance plateaus near 300B tokens. Discussions also covered the fastest output speeds of Gemini models and concerns about over-reliance on benchmarks for intelligence measurement. Swyx will curate the Data Council AI Engineering Track in April.
Gemma 3 beats DeepSeek V3 in Elo, 2.0 Flash beats GPT4o with Native Image Gen
gemma-3 gemini-1.5-pro gemini-2 o1-preview o3-mini-high deepseek-v3 claude-3.7-sonnet qwen-2.5-max google-deepmind openai multimodality multilinguality context-window quantization image-generation model-benchmarking model-performance vision reach_vb _philschmid danielhanchen lmarena_ai osanseviero
Google DeepMind launched the Gemma 3 family of models featuring a 128k context window, multimodal input (image and video), and multilingual support for 140+ languages. The Gemma 3-27B model ranks among the top open models on LMArena benchmarks, outperforming several competitors and matching Gemini-1.5-Pro on benchmarks. Additionally, Gemini 2.0 Flash introduced native image generation with advanced image editing capabilities, a feature teased by OpenAI but not launched. The updates highlight significant advances in context length, multimodality, and model efficiency via quantization.
Gemini 2.0 Flash GA, with new Flash Lite, 2.0 Pro, and Flash Thinking
gemini-2.0-flash gemini-2.0-flash-lite gemini-2.0-pro-experimental gemini-1.5-pro deepseek-r1 gpt-2 llama-3-1 google-deepmind hugging-face anthropic multimodality context-windows cost-efficiency pretraining fine-tuning reinforcement-learning transformer tokenization embeddings mixture-of-experts andrej-karpathy jayalammar maartengr andrewyng nearcyan
Google DeepMind officially launched Gemini 2.0 models including Flash, Flash-Lite, and Pro Experimental, with Gemini 2.0 Flash outperforming Gemini 1.5 Pro while being 12x cheaper and supporting multimodal input and a 1 million token context window. Andrej Karpathy released a 3h31m video deep dive into large language models, covering pretraining, fine-tuning, and reinforcement learning with examples like GPT-2 and Llama 3.1. A free course on Transformer architecture was introduced by Jay Alammar, Maarten Grootendorst, and Andrew Ng, focusing on tokenizers, embeddings, and mixture-of-experts models. DeepSeek-R1 reached 1.2 million downloads on Hugging Face with a detailed 36-page technical report. Anthropic increased rewards to $10K and $20K for their jailbreak challenge, while the BlueRaven extension was updated to hide Twitter metrics for unbiased engagement.
Mistral Small 3 24B and Tulu 3 405B
mistral-small-3 tulu-3-405b llama-3 tiny-swallow-1.5b qwen-2.5-max deepseek-v3 claude-3.5-sonnet gemini-1.5-pro gpt4o-mini llama-3-3-70b mistral-ai ai2 sakana-ai alibaba_qwen deepseek ollama llamaindex reinforcement-learning model-fine-tuning local-inference model-performance model-optimization on-device-ai instruction-following api training-data natural-language-processing clementdelangue dchaplot reach_vb
Mistral AI released Mistral Small 3, a 24B parameter model optimized for local inference with low latency and 81% accuracy on MMLU, competing with Llama 3.3 70B, Qwen-2.5 32B, and GPT4o-mini. AI2 released Tülu 3 405B, a finetune of Llama 3 405B trained with Reinforcement Learning from Verifiable Rewards (RLVR), competitive with DeepSeek v3. Sakana AI launched TinySwallow-1.5B, a Japanese language model using TAID for on-device use. Alibaba_Qwen released Qwen 2.5 Max, trained on 20 trillion tokens, with performance comparable to DeepSeek V3, Claude 3.5 Sonnet, and Gemini 1.5 Pro, and updated API pricing. These releases highlight advances in open models, efficient inference, and reinforcement learning techniques.
Google wakes up: Gemini 2.0 et al
gemini-2.0-flash gemini-1.5-pro gemini-exp-1206 claude-3.5-sonnet opus google-deepmind openai apple multimodality agent-development multilinguality benchmarking model-releases demis-hassabis sundar-pichai paige-bailey bindureddy
Google DeepMind launched Gemini 2.0 Flash, a new multimodal model outperforming Gemini 1.5 Pro and o1-preview, featuring vision and voice APIs, multilingual capabilities, and native tool use. It powers new AI agents like Project Astra and Project Mariner, with Project Mariner achieving state-of-the-art 83.5% on the WebVoyager benchmark. OpenAI announced ChatGPT integration with Apple devices, enabling Siri access and visual intelligence features. Claude 3.5 Sonnet is noted as a distilled version of Opus. The AI community's response at NeurIPS 2024 has been overwhelmingly positive, signaling a strong comeback for Google in AI innovation. Key topics include multimodality, agent development, multilinguality, benchmarking, and model releases.
GitHub Copilot Strikes Back
claude-3-5-sonnet gemini-1.5-pro o1-preview gemini-flash-8b github anthropic google-deepmind openai weights-biases model-picker-ui multi-model-integration natural-language-applications deployment-free-hosting model-prompting multimodal-observability audio-tracing codebase-optimization price-performance-ratio cassidy-williams fchollet rohanpaul_ai jxmnop
GitHub's tenth annual Universe conference introduced the Multi-model Copilot featuring Anthropic's Claude 3.5 Sonnet, Google's Gemini 1.5 Pro, and OpenAI's o1-preview models in a new picker UI, allowing developers to choose from multiple companies' models. The event also showcased GitHub Spark, an AI-native tool for building natural language applications with deployment-free hosting and integrated model prompting. Additionally, GitHub updated its Copilot Workspace with new agents and security Autofix features. Weights & Biases launched Weave with multimodal observability supporting audio, text, and images, integrating the OpenAI Realtime API. Twitter recaps highlighted tinygrad's codebase optimization and discussions on GenAI adoption and Gemini Flash-8B's cost efficiency at $0.0375 per million tokens.
not much happened today
aria o1-preview o1-mini gemini-1.5-pro gemini-1.5-flash gemini-1.5 claude-3.5-sonnet rhymes-ai openai anthropic google meta-ai-fair oxylabs multimodality mixture-of-experts long-context retrieval-augmented-generation benchmarking software-engineering llm-evaluation prompt-engineering web-scraping python production-applications mervenoyann osanseviero dbrxmosaicai ylecun ofirpress clefourrier omarsar0 rohanpaul_ai svpino finbarrtimbers _philschmid
Rhymes AI released Aria, a new 25.3B parameter multimodal MoE model supporting text, code, image, and video with a 64k token context window and Apache-2.0 license. OpenAI's o1-preview and o1-mini models show consistent improvement over Anthropic and Google Gemini 1.5 Pro/Flash on long context RAG benchmarks up to 128k tokens, while Google Gemini 1.5 models excel at extreme context lengths up to 2 million tokens. Meta AI expanded rollout to 21 countries with new language support but remains unavailable in the EU. The one-year anniversary of SWE-bench benchmark for software engineering tasks was celebrated, alongside the introduction of SWE-bench Multimodal. New AI tools include OxyCopilot by Oxylabs for web scraping, Taipy for Python-based production apps, and Latitude for prompt engineering. Industry insights highlight changing AI funding dynamics and OpenAI's strategic focus on consumer products like ChatGPT. "all recaps done by Claude 3.5 Sonnet, best of 4 runs."
Cerebras Inference: Faster, Better, AND Cheaper
llama-3.1-8b llama-3.1-70b gemini-1.5-flash gemini-1.5-pro cogvideox-5b mamba-2 rene-1.3b llama-3.1 gemini-1.5 claude groq cerebras cursor google-deepmind anthropic inference-speed wafer-scale-chips prompt-caching model-merging benchmarking open-source-models code-editing model-optimization jeremyphoward sam-altman nat-friedman daniel-gross swyx
Groq led early 2024 with superfast LLM inference speeds, achieving ~450 tokens/sec for Mixtral 8x7B and 240 tokens/sec for Llama 2 70B. Cursor introduced a specialized code edit model hitting 1000 tokens/sec. Now, Cerebras claims the fastest inference with their wafer-scale chips, running Llama 3.1-8B at 1800 tokens/sec and Llama 3.1-70B at 450 tokens/sec at full precision, with competitive pricing and a generous free tier. Google's Gemini 1.5 models showed significant benchmark improvements, especially Gemini-1.5-Flash and Gemini-1.5-Pro. New open-source models like CogVideoX-5B and Mamba-2 (Rene 1.3B) were released, optimized for consumer hardware. Anthropic's Claude now supports prompt caching, improving speed and cost efficiency. "Cerebras Inference runs Llama3.1 20x faster than GPU solutions at 1/5 the price."
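Prompt caching is opt-in per content block: a large, stable prefix is tagged with a cache_control field so later requests that share the prefix reuse it at a discount. A sketch of the request shape (the payload mirrors Anthropic's Messages API as announced at the time; the document text is a stand-in):

```python
# Sketch of Anthropic prompt caching: mark a large, reusable system
# prompt with cache_control so subsequent calls with the same prefix
# hit the cache instead of reprocessing it. Field names reflect the
# API at launch and may have evolved; the context text is invented.

LONG_CONTEXT = "…imagine a large codebase or policy document here…" * 100

request = {
    "model": "claude-3-5-sonnet-20240620",
    "max_tokens": 1024,
    "system": [
        {
            "type": "text",
            "text": LONG_CONTEXT,
            # Cacheable block: later requests sharing this prefix pay
            # a reduced input-token price and skip re-ingestion latency.
            "cache_control": {"type": "ephemeral"},
        }
    ],
    "messages": [{"role": "user", "content": "Summarize the context."}],
}

print(request["system"][0]["cache_control"]["type"])  # ephemeral
```

The speed and cost wins come entirely from the cached prefix, so the technique pays off most when many requests share one long system prompt.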
Gemini Live
gemini-1.5-pro genie falcon-mamba gemini-1.5 llamaindex google anthropic tii supabase perplexity-ai llamaindex openai hugging-face multimodality benchmarking long-context retrieval-augmented-generation open-source model-releases model-integration model-performance software-engineering linear-algebra hugging-face-hub debugging omarsar0 osanseviero dbrxmosaicai alphasignalai perplexity_ai _jasonwei svpino
Google launched Gemini Live on Android for Gemini Advanced subscribers during the Pixel 9 event, featuring integrations with Google Workspace apps and other Google services. The rollout began on 8/12/2024, with iOS support planned. Cosine released Genie, an AI software engineering system achieving a 57% improvement on SWE-Bench. TII introduced Falcon Mamba, a 7B attention-free open-access model scalable to long sequences. Benchmarking showed that longer context lengths do not always improve Retrieval-Augmented Generation. Supabase launched an AI-powered Postgres service dubbed the "ChatGPT of databases," fully open source. Perplexity AI partnered with Polymarket to integrate real-time probability predictions into search results. A tutorial demonstrated a multimodal recipe recommender using Qdrant, LlamaIndex, and Gemini. An OpenAI engineer shared success tips emphasizing debugging and hard work. The connection between matrices and graphs in linear algebra was highlighted for insights into nonnegative matrices and strongly connected components. Keras 3.5.0 was released with Hugging Face Hub integration for model saving and loading.
not much happened today
qwen2-math-72b gpt-4o claude-3.5-sonnet gemini-1.5-pro llama-3.1-405b idefics3-llama-8b anthropic google mistral-ai llamaindex math fine-tuning synthetic-data reinforcement-learning bug-bounty visual-question-answering open-source retrieval-augmented-generation agentic-ai ai-safety policy rohanpaul_ai anthropicai mervenoyann jeremyphoward omarsar0 ylecun bindureddy
Qwen2-Math-72B outperforms GPT-4o, Claude-3.5-Sonnet, Gemini-1.5-Pro, and Llama-3.1-405B on math benchmarks using synthetic data and advanced optimization techniques. Google AI cuts pricing for Gemini 1.5 Flash by up to 78%. Anthropic expands its bug bounty program targeting universal jailbreaks in next-gen safety systems. Tutorial on QLoRA fine-tuning of IDEFICS3-Llama 8B for visual question answering released. A Chinese open weights model surpasses previous MATH benchmark records. Surveys on Mamba models and LLM-based agents for software engineering highlight advancements and applications. Open-source tools like R2R RAG engine and LlamaIndex Workflows simplify building complex AI applications. Mistral AI introduces customizable AI agents. Concerns raised about California bill SB 1047's focus on existential risk and debates on banning open-source AI. Memes and humor continue in AI communities.
GPT4o August + 100% Structured Outputs for All
gpt-4o-2024-08-06 llama-3-1-405b llama-3 claude-3.5-sonnet gemini-1.5-pro gpt-4o yi-large-turbo openai meta-ai-fair google-deepmind yi-large nvidia groq langchain jamai langsmith structured-output context-windows model-pricing benchmarking parameter-efficient-expert-retrieval retrieval-augmented-generation mixture-of-experts model-performance ai-hardware model-deployment filtering multi-lingual vision john-carmack jonathan-ross rohanpaul_ai
OpenAI released the new gpt-4o-2024-08-06 model with a 16k output token limit and 33-50% lower pricing than the previous 4o-May version, featuring a new Structured Output API that improves output quality and reduces retry costs. Meta AI launched Llama 3.1, a 405-billion parameter model surpassing GPT-4 and Claude 3.5 Sonnet on benchmarks, alongside expanding the Llama Impact Grant program. Google DeepMind quietly released Gemini 1.5 Pro, outperforming GPT-4o, Claude-3.5, and Llama 3.1 on LMSYS benchmarks and leading the Vision Leaderboard. Yi-Large Turbo was introduced as a cost-effective upgrade priced at $0.19 per million tokens. In hardware, NVIDIA H100 GPUs were highlighted by John Carmack for their massive AI workload power, and Groq announced plans to deploy 108,000 LPUs by Q1 2025. New AI tools and techniques include RAG (Retrieval-Augmented Generation), the JamAI Base platform for Mixture of Agents systems, and LangSmith's enhanced filtering capabilities. Google DeepMind also introduced PEER (Parameter Efficient Expert Retrieval) architecture.
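Structured Outputs works by attaching a JSON Schema with strict enforcement to the request, so the model's reply is guaranteed to parse against the schema. A sketch of the request shape (the calendar-event schema is a made-up example):

```python
import json

# Sketch of the Structured Outputs request shape shipped with
# gpt-4o-2024-08-06. With strict=True the API constrains decoding so
# the reply always parses against the schema, which is what removes
# the retry cost mentioned above. The event schema is invented.

schema = {
    "name": "calendar_event",
    "strict": True,
    "schema": {
        "type": "object",
        "properties": {
            "title": {"type": "string"},
            "date": {"type": "string"},
            "attendees": {"type": "array", "items": {"type": "string"}},
        },
        "required": ["title", "date", "attendees"],
        "additionalProperties": False,
    },
}

# Passed to the Chat Completions API as the response_format parameter:
# response_format={"type": "json_schema", "json_schema": schema}
response_format = {"type": "json_schema", "json_schema": schema}

# A conforming reply is plain JSON that parses straight into a dict.
reply = '{"title": "Standup", "date": "2024-08-06", "attendees": ["a", "b"]}'
event = json.loads(reply)
print(sorted(event) == sorted(schema["schema"]["required"]))  # True
```

Because every key is required and `additionalProperties` is false, downstream code can index into the parsed dict without defensive checks.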
Execuhires: Tempting The Wrath of Khan
gemini-1.5-pro gpt-4o claude-3.5 flux-1 llama-3-1-405b character.ai google adept amazon inflection microsoft stability-ai black-forest-labs schelling google-deepmind openai anthropic meta-ai-fair lmsys langchainai execuhire model-benchmarking multilinguality math coding text-to-image agent-ide open-source-models post-training data-driven-performance noam-shazeer mostafa-mostaque david-friedman rob-rombach alexandr-wang svpino rohanpaul_ai
Character.ai's $2.5b execuhire to Google marks a significant leadership move alongside Adept's $429m execuhire to Amazon and Inflection's $650m execuhire to Microsoft. Despite strong user growth and content momentum, Character.ai's CEO Noam Shazeer returns to Google, signaling shifting vibes in the AI industry. Google DeepMind's Gemini 1.5 Pro tops Chatbot Arena benchmarks, outperforming GPT-4o and Claude-3.5, excelling in multilingual, math, and coding tasks. The launch of Black Forest Labs' FLUX.1 text-to-image model and LangGraph Studio agent IDE highlight ongoing innovation. Llama 3.1 405B is released as the largest open-source model, fostering developer use and competition with closed models. The industry is focusing increasingly on post-training and data as key competitive factors, raising questions about acquisition practices and regulatory scrutiny.
not much happened today
sam-2 gemini-1.5-pro chatgpt midjourney-v6.1 meta-ai-fair google-deepmind scale-ai apple canva hugging-face object-segmentation quantization web-development-framework adversarial-robustness on-device-ai open-source robotics voice vision jeremyphoward demis-hassabis ylecun maartengrootendorst jimfan
Meta released SAM 2, a unified model for real-time object segmentation, with a new dataset 4.5x larger and with 53x more annotations than previous ones. FastHTML, a new Python web framework by Jeremy Howard, enables easy creation and deployment of interactive web apps. Scale AI launched the SEAL Leaderboard on adversarial robustness, topped by Gemini 1.5 Pro from Google DeepMind. Apple published a technical report on their Intelligence Foundation Language Models for on-device and server use. Yann LeCun emphasized the importance of open source AI in an article co-authored with Martin Casado and Ion Stoica. Maarten Grootendorst's "Visual Guide to Quantization" on efficient LLM inference went viral. ChatGPT started rolling out advanced voice and vision-enabled modes to select users. Leonardo AI was acquired by Canva. Jim Fan shared insights on Project GR00T augmenting human demonstration data for robotics. Midjourney v6.1 was released.
Shall I compare thee to a Sonnet's day?
claude-3.5-sonnet claude-3.5 gpt-4o gemini-1.5-pro anthropic lmsys glif comfyui hard-prompts json json-extraction meme-generation instruction-following app-development fusion-energy nuclear-fission productivity fchollet mustafasuleyman
Claude 3.5 Sonnet from Anthropic achieves top rankings in coding and hard prompt arenas, surpassing GPT-4o and competing with Gemini 1.5 Pro at lower cost. Glif demonstrates a fully automated Wojak meme generator using Claude 3.5 for JSON generation and ComfyUI for images, showcasing new JSON extractor capabilities. Claude's Artifacts feature enables rapid creation of niche apps, exemplified by a dual monitor visualizer made in under 5 minutes. François Chollet highlights that fusion energy is not a near-term solution compared to existing nuclear fission plants. Mustafa Suleyman notes that 75% of desk workers now use AI, marking a shift toward AI-assisted productivity.
Gemini launches context caching... or does it?
nemotron llama-3-70b chameleon-7b chameleon-34b gemini-1.5-pro deepseek-coder-v2 gpt-4-turbo claude-3-opus gemini-1.5-pro nvidia meta-ai-fair google deepseek hugging-face context-caching model-performance fine-tuning reinforcement-learning group-relative-policy-optimization large-context model-training coding model-release rohanpaul_ai _philschmid aman-sanger
Nvidia's Nemotron ranks #1 open model on LMsys and #11 overall, surpassing Llama-3-70b. Meta AI released Chameleon 7B/34B models after further post-training. Google's Gemini introduced context caching, offering a cost-efficient middle ground between RAG and finetuning, with a minimum input token count of 33k and no upper limit on cache duration. DeepSeek launched DeepSeek-Coder-V2, a 236B parameter model outperforming GPT-4 Turbo, Claude-3-Opus, and Gemini-1.5-Pro in coding tasks, supporting 338 programming languages and extending context length to 128K. It was trained on 6 trillion tokens using the Group Relative Policy Optimization (GRPO) algorithm and is available on Hugging Face with a commercial license. These developments highlight advances in model performance, context caching, and large-scale coding models.
Contextual Position Encoding (CoPE)
cope gemini-1.5-flash gemini-1.5-pro claude gpt-3 meta-ai-fair google-deepmind anthropic perplexity-ai langchain openai positional-encoding transformers counting copying language-modeling coding external-memory tool-use model-evaluation inference-speed model-benchmarking scaling research-synthesis jason-weston alexandr-wang karpathy arav-srinivas
Meta AI researcher Jason Weston introduced CoPE, a novel positional encoding method for transformers that incorporates context to create learnable gates, enabling improved handling of counting and copying tasks and better performance on language modeling and coding. The approach can potentially be extended with external memory for gate calculation. Google DeepMind released Gemini 1.5 Flash and Pro models optimized for fast inference. Anthropic announced general availability of tool use for Claude, enhancing its ability to orchestrate tools for complex tasks. Alexandr Wang launched SEAL Leaderboards for private, expert evaluations of frontier models. Karpathy reflected on the 4th anniversary of GPT-3, emphasizing scaling and practical improvements. Perplexity AI launched Perplexity Pages to convert research into visually appealing articles, described as an "AI Wikipedia" by Arav Srinivas.
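The gate-and-sum core of CoPE is compact enough to sketch. The gate and position formulas below follow the paper (a gate g_ik = sigmoid(q_i · k_k), with the position of key j relative to query i being the cumulative gate mass between them); the random data and the omission of the fractional-position embedding interpolation are simplifications for illustration:

```python
import numpy as np

# Minimal sketch of Contextual Position Encoding (CoPE): instead of
# counting every token, each position increment is a learned gate
# g_ik = sigmoid(q_i . k_k), so "position" can count only the tokens
# the context deems relevant (e.g. sentence boundaries). Shapes and
# formulas follow the paper; data is random and the interpolation of
# embeddings at fractional positions is omitted.

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)
seq_len, dim = 5, 8
q = rng.normal(size=(seq_len, dim))   # queries
k = rng.normal(size=(seq_len, dim))   # keys

# Gates: how much each key token counts toward position, per query.
gates = sigmoid(q @ k.T)              # (seq_len, seq_len), values in (0, 1)

# CoPE position of key j relative to query i: cumulative gate mass
# over tokens from j to i inclusive, for j <= i (causal attention).
pos = np.zeros((seq_len, seq_len))
for i in range(seq_len):
    for j in range(i + 1):
        pos[i, j] = gates[i, j : i + 1].sum()

# Positions are fractional and bounded by the token distance i - j + 1.
print(pos[4, 0] < 5.0)  # True
```

Since each gate lies strictly between 0 and 1, positions are real-valued rather than integer, which is why the full method interpolates between learned integer position embeddings.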
1 TRILLION token context, real time, on device?
gemini-1.5-pro gemini-1.5 cartesia mistral-ai scale-ai state-space-models voice-models multimodality model-performance on-device-ai long-context evaluation-leaderboards learning-rate-optimization scientific-publishing research-vs-engineering yann-lecun elon-musk
Cartesia, a startup specializing in state space models (SSMs), launched a low latency voice model outperforming transformer-based models with 20% lower perplexity, 2x lower word error, and 1 point higher NISQA quality. This breakthrough highlights the potential for models that can continuously process and reason over massive streams of multimodal data (text, audio, video) with a trillion token context window on-device. The news also covers recent AI developments including Mistral's Codestral weights release, the Schedule Free optimizers paper release, and Scale AI's new elo-style eval leaderboards. Additionally, a debate between Yann LeCun and Elon Musk on the importance of publishing AI research versus engineering achievements was noted. The Gemini 1.5 Pro/Advanced models were mentioned for their strong performance.
Skyfall
gemini-1.5-pro gemini-1.5-flash yi-1.5 kosmos-2.5 paligemma falcon-2 deepseek-v2 hunyuan-dit gemini-1.5 gemini-1.5-flash yi-1.5 google-deepmind yi-ai microsoft hugging-face langchain maven multimodality mixture-of-experts transformer model-optimization long-context model-performance model-inference fine-tuning local-ai scaling-laws causal-models hallucination-detection model-distillation model-efficiency hamel-husain dan-becker clement-delangue philschmid osanseviero arankomatsuzaki jason-wei rohanpaul_ai
Between 5/17 and 5/20/2024, key AI updates include Google DeepMind's Gemini 1.5 Pro, a sparse multimodal MoE with up to 10M token context, and Gemini 1.5 Flash, a dense Transformer decoder that is 3x faster and 10x cheaper. Yi AI released Yi-1.5 models with extended context windows of 32K and 16K tokens. Other notable releases include Kosmos 2.5 (Microsoft), PaliGemma (Google), Falcon 2, DeepSeek v2 lite, and the HunyuanDiT diffusion model. Research highlights feature an Observational Scaling Laws paper predicting model performance across families, a Layer-Condensed KV Cache technique boosting inference throughput by up to 26x, and the SUPRA method converting LLMs into RNNs for reduced compute costs. Hugging Face expanded local AI capabilities enabling on-device AI without cloud dependency. LangChain updated its v0.2 release with improved documentation. The community also welcomed a new LLM Finetuning Discord by Hamel Husain and Dan Becker for Maven course users. "Hugging Face is profitable, or close to profitable," enabling $10 million in free shared GPUs for developers.
Not much happened today
gpt-4o gemini-1.5-pro gemini-1.5-flash imagen-3 veo reka-core qwen-1.5-110b openai google-deepmind anthropic rekailabs alibaba salesforce multimodality long-context model-releases reinforcement-learning model-benchmarking text-to-image video-generation ai-assistants ilya-sutskever jakub-pachocki mike-krieger sama
Ilya Sutskever steps down as Chief Scientist at OpenAI after nearly a decade, with Jakub Pachocki named as his successor. Google DeepMind announces Gemini 1.5 Pro and Gemini 1.5 Flash models featuring 2 million token context and improved multimodal capabilities, alongside demos of Project Astra AI assistant, Imagen 3 text-to-image model, and Veo generative video model. GPT-4o tops the VHELM leaderboard and outperforms competitors on LMSYS Chatbot Arena. Reka Core multimodal model with 128K context and Alibaba's Qwen1.5-110B open-source model are released. Salesforce shares an online RLHF recipe.
Google I/O in 60 seconds
gemini-1.5-pro gemini-flash gemini-ultra gemini-pro gemini-nano gemma-2 llama-3-70b paligemma imagen-3 veo google google-deepmind youtube tokenization model-performance fine-tuning vision multimodality model-release model-training model-optimization ai-integration image-generation watermarking hardware-optimization voice video-understanding
Google announced updates to the Gemini model family, including Gemini 1.5 Pro with 2 million token support, and the new Gemini Flash model optimized for speed with 1 million token capacity. The Gemini suite now includes Ultra, Pro, Flash, and Nano models, with Gemini Nano integrated into Chrome 126. Additional Gemini features include Gemini Gems (custom GPTs), Gemini Live for voice conversations, and Project Astra, a live video understanding assistant. The Gemma model family was updated with Gemma 2 at 27B parameters, offering near Llama-3-70B performance at half the size, plus PaliGemma, a vision-language open model inspired by PaLI-3. Other launches include DeepMind's Veo, Imagen 3 for photorealistic image generation, and a Music AI Sandbox collaboration with YouTube. SynthID watermarking now extends to text, images, audio, and video. The Trillium TPUv6 codename was revealed. Google also integrated AI across its product suite including Workspace, Email, Docs, Sheets, Photos, Search, and Lens. "The world awaits Apple's answer."
Gemini Pro and GPT4T Vision go GA on the same day by complete coincidence
gemini-1.5-pro gpt-4-turbo llama-3 orca-2.5-7b functionary-v2.4 cosxl google openai meta-ai-fair hugging-face cohere million-token-context-window audio-processing file-api text-embedding function-calling reasoning direct-nash-optimization contrastive-learning code-interpreter diffusion-models neural-odes inference-speed multilingual-dataset image-editing no-code-development
At Google Cloud Next, Gemini 1.5 Pro was released with a million-token context window, available in 180+ countries, featuring 9.5 hours of audio understanding, a new File API for nearly unlimited free uploads, and the Gecko-1b-256/768 embedding model. GPT-4 Turbo with Vision became generally available in the API with a major update improving reasoning capabilities. Meta Platforms plans to launch smaller versions of Llama 3 next week. The Orca 2.5 7B model using Direct Nash Optimization outperforms older GPT-4 versions in AlpacaEval. New releases include Functionary-V2.4 with enhanced function calling and code interpretation, and CosXL models for image editing. Research highlights include continuous U-Nets for diffusion models achieving up to 80% faster inference and a massive multilingual dataset with ~5.6 trillion word tokens. Creative applications include a no-code touch screen game made with Gemini 1.5 and AI-generated novel trailers.
12/16/2023: ByteDance suspended by OpenAI
claude-2.1 gpt-4-turbo gemini-1.5-pro gpt-5 gpt-4.5 gpt-4 openai google-deepmind anthropic hardware gpu api-costs coding model-comparison subscription-issues payment-processing feature-confidentiality ai-art-generation organizational-productivity model-speculation
The OpenAI Discord community discussed hardware options like Mac racks and the A6000 GPU, highlighting their value for AI workloads. They compared Claude 2.1 and GPT-4 Turbo on coding tasks, with GPT-4 Turbo outperforming Claude 2.1. The benefits of the Bard API for Gemini Pro were noted, including a free quota of 60 queries per minute. Users shared experiences with ChatGPT Plus membership issues, payment problems, and speculated about the upcoming GPT-5 and the rumored GPT-4.5. Discussions also covered the confidentiality of the Alpha feature, AI art generation policies, and improvements in organizational work features. The community expressed mixed feelings about GPT-4's performance and awaited future model updates.