Frozen AI News archive

OpenAI takes on Gemini's Deep Research

**OpenAI** released **Deep Research**, an agent powered by the full **o3** model, showing significant improvements on the **HLE benchmark** and achieving SOTA results on **GAIA**. The release includes an "inference time scaling" chart demonstrating rigorous research, though some criticism arose over reporting only public test set results. The agent is described as "extremely simple" and is currently limited to 100 queries/month, with a higher-rate version planned. Reception has been mostly positive, with some skepticism. Separately, advances in **reinforcement learning** were highlighted, including a simple test-time scaling technique called **budget forcing** that improved reasoning on math competition questions by up to 27%. Researchers from **Google DeepMind**, **NYU**, **UC Berkeley**, and **HKU** contributed to related findings. The original **Gemini Deep Research** team will participate in the upcoming AI Engineer NYC event.
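The budget forcing technique is simple enough to sketch: if the model tries to close its reasoning before a minimum token budget is spent, the end-of-thinking delimiter is suppressed and a token like "Wait" is appended to nudge further reasoning; once the maximum budget is hit, the delimiter is force-appended to cap it. A minimal sketch, assuming a hypothetical `generate()` callable and a `</think>` delimiter (both stand-ins, not any particular API):

```python
# Hedged sketch of "budget forcing" for test-time scaling.
# Assumptions (not from the source): generate(text, stop, max_new_tokens)
# returns (continuation_without_stop_token, n_tokens_generated), and the
# model wraps its reasoning in a </think>-terminated block.

END_THINK = "</think>"  # assumed end-of-thinking delimiter

def budget_force(generate, prompt, min_tokens=256, max_tokens=2048, max_waits=4):
    """Extend or cap a model's thinking trace to fit a token budget."""
    trace, used, waits = prompt, 0, 0
    while True:
        # Generation halts either at the delimiter or at the remaining budget.
        chunk, n = generate(trace, stop=END_THINK, max_new_tokens=max_tokens - used)
        trace += chunk
        used += n
        if used >= max_tokens:
            return trace + END_THINK   # budget exhausted: force the stop
        if used >= min_tokens or waits >= max_waits:
            return trace + END_THINK   # enough thinking: allow the stop
        trace += "Wait"                # stopped too early: suppress delimiter,
        waits += 1                     # append "Wait" to extend reasoning
```

The same wrapper gives a knob for trading tokens against accuracy at inference time, which is what the reported math-competition gains sweep over.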

Canonical issue URL

AI News for 1/31/2025-2/3/2025. We checked 7 subreddits, 433 Twitters and 34 Discords (225 channels, and 16942 messages) for you. Estimated reading time saved (at 200wpm): 1721 minutes. You can now tag @smol_ai for AINews discussions!

When introducing Operator (our coverage here), sama hinted that more OpenAI Agents were on the way, but few of us expected the next one just 9 days later, shipped from Japan on a US Sunday no less:

https://www.youtube.com/watch?v=YkCDVn3_wiw

The blogpost offers more insight into intended usecases, but the notable bit is Deep Research's result on Dan Hendrycks' new HLE benchmark, more than doubling the score of o3-mini-high, released just on Friday (our coverage here).

image.png

They also released a SOTA result on GAIA, which GAIA's coauthors criticized for reporting only public test set results - an obvious concern for an agent that can surf the web. That said, there is zero reason to question the integrity of the result, especially since it is confirmed in footnotes and samples of the GAIA test traces were published.

OAIDR comes with its own version of the "inference time scaling" chart, which is very impressive - not for the scaling shown in the chart itself, but for the clear rigor in the research process required to produce such a chart (assuming, of course, that this is research rather than marketing, though here the lines are unfortunately blurred to sell a $200/month subscription).

image.png

image.png

OpenAI staffers confirmed that this is the first time the full o3 has been released in the wild (and gdb says it is "an extremely simple agent"), and the blogpost notes that an "o3-deep-research-mini" version is on the way, which will raise rate limits beyond the 100 queries/month available today.

Reception has been mostly positive, sometimes to the point of hyperventilation. Some folks are making fun of the hyperbole, but on balance we tend to agree with the positive takes of Ethan Mollick and Dan Shipper, though we do experience a lot of failures as well.


Shameless Plug: We will have multiple Deep Research and other agent builders, including the original Gemini Deep Research team, at AI Engineer NYC on Feb 20-22. Last call for applicants!


{% if medium == 'web' %}

Table of Contents

[TOC]

{% else %}

The Table of Contents and Channel Summaries have been moved to the web version of this email: [{{ email.subject }}]({{ email_url }})!

{% endif %}


AI Twitter Recap

Advances in Reinforcement Learning (RL) and AI Research

OpenAI's Deep Research and Reasoning Models

Developments in Qwen Models and AI Advancements

AI Safety and Defending Against Jailbreaks

AI Tools and Platforms for Developers

Memes and Humor


AI Reddit Recap

/r/LocalLlama Recap

Theme 1. Paradigm Shift in AI Model Hardware: From GPUs to CPU+RAM

Theme 2. Rise of Mistral, Qwen, and DeepSeek outside the USA

Theme 3. Phi 4 Model Gaining Traction for Underserved Hardware

Theme 4. DeepSeek-R1's Competence in Complex Problem Solving

Other AI Subreddit Recap

/r/Singularity, /r/Oobabooga, /r/MachineLearning, /r/OpenAI, /r/ClaudeAI, /r/StableDiffusion, /r/ChatGPT

Theme 1. DeepSeek and Deep Research: Disruptive AI Challenges

Theme 2. OpenAI's New Hardware Initiatives with Jony Ive

Theme 3. Critique on AI Outperforming Human Expertise Claims


AI Discord Recap

A summary of Summaries of Summaries by Gemini 2.0 Flash Thinking (gemini-2.0-flash-thinking-exp)

Theme 1. DeepSeek AI's Ascendancy and Regulatory Scrutiny

Theme 2. OpenAI's o3-mini: Performance and Public Scrutiny

Theme 3. AI Tooling and IDEs: Winds of Change

Theme 4. LLM Training and Optimization: New Techniques Emerge

Theme 5. Hardware Hurdles and Horizons


PART 1: High level Discord summaries

Unsloth AI (Daniel Han) Discord


Codeium (Windsurf) Discord


aider (Paul Gauthier) Discord


Cursor IDE Discord


Yannick Kilcher Discord


LM Studio Discord


OpenAI Discord


Nous Research AI Discord


Interconnects (Nathan Lambert) Discord


Latent Space Discord


Eleuther Discord


MCP (Glama) Discord


Stackblitz (Bolt.new) Discord


Nomic.ai (GPT4All) Discord


Notebook LM Discord


Modular (Mojo 🔥) Discord


Torchtune Discord


LLM Agents (Berkeley MOOC) Discord


tinygrad (George Hotz) Discord


Cohere Discord


LlamaIndex Discord


DSPy Discord


LAION Discord


Axolotl AI Discord


OpenInterpreter Discord


MLOps @Chipro Discord


Mozilla AI Discord


The HuggingFace Discord has no new messages. If this guild has been quiet for too long, let us know and we will remove it.


The Gorilla LLM (Berkeley Function Calling) Discord has no new messages. If this guild has been quiet for too long, let us know and we will remove it.


The AI21 Labs (Jamba) Discord has no new messages. If this guild has been quiet for too long, let us know and we will remove it.


PART 2: Detailed by-Channel summaries and links

{% if medium == 'web' %}

Unsloth AI (Daniel Han) ▷ #general (1121 messages🔥🔥🔥):

Unsloth Framework, DeepSeek R1, Batch Inference, Legal Considerations for AI Training, LLM Performance

Links mentioned:


Unsloth AI (Daniel Han) ▷ #off-topic (262 messages🔥🔥):

AMD vs Nvidia in LLMs, Deepseek Optimization Issues, Fine-Tuning Small LLMs, Performance of Custom LLMs, Date Time Parsing with LLMs

Link mentioned: Reddit - Dive into anything: no description found


Unsloth AI (Daniel Han) ▷ #help (445 messages🔥🔥🔥):

Unsloth and dynamic quantization, Using Ollama with custom models, Gradient accumulation in model training, Batch inference with FastLanguageModel, Model compatibility across different environments

Links mentioned:


Unsloth AI (Daniel Han) ▷ #showcase (6 messages):

DeepSeek-R1, Klarity Library, Fine-Tuning LLMs, OpenWebUI Integration, Local Model Running

Links mentioned:


Unsloth AI (Daniel Han) ▷ #research (75 messages🔥🔥):

VLLM Offloading with GGUF, Dynamic Quantization for Inferencing, DeepSeek R1 Performance, Test Time Compute Strategies, Horizontal vs Vertical Distillation

Links mentioned:


Codeium (Windsurf) ▷ #announcements (1 messages):

Windsurf 1.2.5 Update, Cascade web search features

Link mentioned: Windsurf Editor Changelogs | Windsurf Editor and Codeium extensions: Latest updates and changes for the Windsurf Editor.


Codeium (Windsurf) ▷ #discussion (306 messages🔥🔥):

DeepSeek Models, Windsurf Pricing and Discounts, Codeium Extensions vs Windsurf, JetBrains Plugin Usage, Model Performance Comparisons

Links mentioned:


Codeium (Windsurf) ▷ #windsurf (657 messages🔥🔥🔥):

Windsurf Issues, Model Performance Comparison, Cascade Functionality, User Experience, Feedback and Support

Links mentioned:


aider (Paul Gauthier) ▷ #announcements (1 messages):

Aider v0.73.0 Release, Context Window Improvements, OpenRouter R1 Support, Model-Specific Reasoning Tags, Code Contribution Stats

Link mentioned: Release history: Release notes and stats on aider writing its own code.


aider (Paul Gauthier) ▷ #general (741 messages🔥🔥🔥):

O3 Mini Performance, Sonnet vs. O3 Mini, MCP Tools, Deep Research, AI Tool Preferences

Links mentioned:


aider (Paul Gauthier) ▷ #questions-and-tips (112 messages🔥🔥):

DeepSeek R1 and Sonnet, Using Aider with external files, API access issues and tier upgrades, Self-hosting LLMs, Configuration management in Aider

Links mentioned:


aider (Paul Gauthier) ▷ #links (14 messages🔥):

Cursor system prompts, Windsurf IDE features, Inline prompting usage, OpenRouter AI web search, Code collaboration with Aider

Links mentioned:


Cursor IDE ▷ #general (768 messages🔥🔥🔥):

O3 Mini Performance, Claude 3.5 Sonnet vs. O3 Mini, Cursor Updates, Meta Prompting Techniques

Links mentioned:


Yannick Kilcher ▷ #general (707 messages🔥🔥🔥):

DeepSeek and AI Regulation, LLM Training and Data Usage, AI Research Funding in the EU and Canada, SFT and RL in AI Models, OpenEuroLLM Project

Links mentioned:


Yannick Kilcher ▷ #paper-discussion (36 messages🔥):

Math performance of LLMs, Self-Other Overlap fine-tuning, Perceptions of OpenAI's models, Development of DeepSeek models, Critiques of AI reasoning capabilities

Links mentioned:


Yannick Kilcher ▷ #agents (4 messages):

O3-mini Autonomy Model, AI News and Updates

Link mentioned: o3-mini is the FIRST DANGEROUS Autonomy Model | INSANE Coding and ML Abilities: The latest AI News. Learn about LLMs, Gen AI and get ready for the rollout of AGI. Wes Roth covers the latest happenings in the world of OpenAI, Google, Anth...


Yannick Kilcher ▷ #ml-news (53 messages🔥):

OpenAI's government contracts, DeepSeek AI model, AI copyright laws, DeepResearch alternative, Legislative actions on AI

Links mentioned:


LM Studio ▷ #general (600 messages🔥🔥🔥):

DeepSeek Models, Multi-Agent Live Chatroom, LM Studio Usage, GPU Utilization, AI in Genealogy Research

Links mentioned:


LM Studio ▷ #hardware-discussion (210 messages🔥🔥):

LM Studio setup with hardware specifications, Comparison of GPUs for AI inference, Tool Calls in AI models, Performance of AMD GPUs, Using local AI models for various tasks

Links mentioned:


OpenAI ▷ #annnouncements (3 messages):

OpenAI o3-mini AMA, Deep Research Agent Launch

Link mentioned: Reddit - Dive into anything: no description found


OpenAI ▷ #ai-discussions (520 messages🔥🔥🔥):

DeepSeek R1 performance, OpenAI context limits, AI model comparisons, Distilled AI models, ChatGPT pro features

Links mentioned:


OpenAI ▷ #gpt-4-discussions (119 messages🔥🔥):

o3 Mini Release and Usage Limits, Model Performance Concerns, GPT Models and Features, User Experience with ChatGPT, AI in Children's Literature


OpenAI ▷ #prompt-engineering (29 messages🔥):

O-model prompt structuring, Conlang development, Model performance discussion, Redundancy in model prompts


OpenAI ▷ #api-discussions (29 messages🔥):

Conlang Development with AI, O-models Processing, Prompt Structuring Challenges, Redundancy and Clarity in Prompts, Zero-shot Prompt Techniques


Nous Research AI ▷ #general (505 messages🔥🔥🔥):

Psyche AI Development, OpenAI and DeepSeek, Legal Considerations in AI, DeepSeek's Advancements, Job Opportunities in AI

Links mentioned:


Nous Research AI ▷ #ask-about-llms (12 messages🔥):

CLIP with Hermes 3 Llama 3.2 3B, Difference between llama.cpp and llama 3.2, Ollama as an inference engine, Training models for academic purposes


Nous Research AI ▷ #research-papers (18 messages🔥):

Weekend Plans, Research Paper Reading Habits, Scite Platform for Research, Deep Gradient Compression, Stanford's Simple Test-Time Scaling

Links mentioned:


Nous Research AI ▷ #interesting-links (16 messages🔥):

Anna's Archive and DeepSeek Impact, Political AI Agent by Society Library, Data Scarcity vs. Copyright Issues, Community Engagement in AI Model Testing, Graphical Tensor Notation in Deep Learning

Links mentioned:


Nous Research AI ▷ #reasoning-tasks (5 messages):

Relign Open-Sourced RL Library, Distributed Training Session, Community Contributions


Interconnects (Nathan Lambert) ▷ #news (228 messages🔥🔥):

Deep Research, SoftBank and OpenAI partnership, Crystal Intelligence model, LLM productivity impacts, Gemini Deep Research limitations

Links mentioned:


Interconnects (Nathan Lambert) ▷ #ml-questions (19 messages🔥):

SmolLM Team's Response, Human Data Space Exploration, Reinforcement Learning Challenges, Use of HF Accelerate vs. Torchrun


Interconnects (Nathan Lambert) ▷ #ml-drama (28 messages🔥):

O3-mini System Card Confusion, DeepSeek's Open Source Impact, Anthropic's Challenge, Wikipedia's Role in AI, Issues with Jailbreak Progress

Links mentioned:


Interconnects (Nathan Lambert) ▷ #random (197 messages🔥🔥):

OpenAI's O3, Deep Research performance comparisons, Research agent advancements, RLHF and model training, CoT and AI policies

Links mentioned:


Interconnects (Nathan Lambert) ▷ #memes (32 messages🔥):

HF_ENABLE_FAST_TRANSFER, Bengali Ghosthunters, TechCrunch's Meme Game, Economic Value Charts, RLHF vs Reasoning Models

Links mentioned:


Interconnects (Nathan Lambert) ▷ #rl (7 messages):

Funding Issues, GRPO and RLVR, Demos, DeepSeek

Link mentioned: Alexander Doria (@dorialexander.bsky.social): In case it interests anyone, I managed to set up a demo of GRPO RL training in Colab. It’s an adaptation of Will Brown instant classic for math reasoning. Replace llama 1B with qwen 0.5b and inference...


Interconnects (Nathan Lambert) ▷ #rlhf (10 messages🔥):

DeepSeek AI R1 model, AI as a science discussion, Thinking models in AI, NeurIPs talk on post-training, R1 training parameters

Link mentioned: DeepSeek R1's recipe to replicate o1 and the future of reasoning LMs: Yes, ring the true o1 replication bells for DeepSeek R1 🔔🔔🔔. Where we go next.


Interconnects (Nathan Lambert) ▷ #reads (8 messages🔥):

Creator Gravity, AI Self-Assessment, Rejection in Writing Jobs, Sander Land's Substack Commentary

Links mentioned:


Interconnects (Nathan Lambert) ▷ #policy (23 messages🔥):

Proposed AI Legislation, Shadow Libraries and AI, Foxconn Tariffs, AI Research Collaboration Restrictions

Links mentioned:


Latent Space ▷ #ai-general-chat (181 messages🔥🔥):

Deep Research Launch, OpenAI Agent Discussions, AI Model Developments, LLM Competition and Internal Conflicts, Reasoning Augmented Generation (ReAG)

Links mentioned:


Latent Space ▷ #ai-announcements (2 messages):

AI Engineer Summit, Karina Nguyen Keynote, New Online Track

Links mentioned:


Latent Space ▷ #ai-in-action-club (270 messages🔥🔥):

Discord screen sharing issues, AI tutoring concepts, Deepseek API discussions, Open source AI tools, Cline vs RooCline

Links mentioned:


Eleuther ▷ #announcements (1 messages):

Probability of Random Language Model Weights, Volume Hypothesis in Deep Learning, Importance Sampling in High Dimensions, Network Complexity and Alignment

Links mentioned:


Eleuther ▷ #general (104 messages🔥🔥):

Reproduction of R1 results, Censorship in Language Models, Mixture of Experts (MoE), DeepSeek's behavior, Community engagement in AI

Links mentioned:


Eleuther ▷ #research (119 messages🔥🔥):

DeepSeek Math paper metrics, DRAW architecture for image generation, Learning-rate schedules in optimization, Distillation processes in model training, Complexity measures in neural networks

Links mentioned:


Eleuther ▷ #interpretability-general (26 messages🔥):

New paper by David Chalmers, Crosscoder repositories, Sparse autoencoders optimization, Expert evaluation in MoE models

Links mentioned:


Eleuther ▷ #lm-thunderdome (5 messages):

Non-overlapping windows, make_disjoint_window modification, Chunked prefill, Data storage in scripts.write_out

Link mentioned: lm-evaluation-harness/lm_eval/models/vllm_causallms.py at 0bb8406f2ebfe074cf173c333bdcd6cffb17279b · EleutherAI/lm-evaluation-harness: A framework for few-shot evaluation of language models. - EleutherAI/lm-evaluation-harness


Eleuther ▷ #gpt-neox-dev (16 messages🔥):

NeoX Performance, Fusion Flags, Transformer Engine Speedups, Scaling Softmax Functions, Error with Detect NVLink Pairs Flag

Links mentioned:


MCP (Glama) ▷ #general (219 messages🔥🔥):

Remote MCP Tools, Discord Server Confusion, Superinterface Products, Load Balancing Using Litellm Proxy, Open-Source Alternatives

Links mentioned:


MCP (Glama) ▷ #showcase (14 messages🔥):

MCP Server Projects, Zed Extensions, Goose Automation, Supergateway v2, FFmpeg Speed Adjustments

Links mentioned:


Stackblitz (Bolt.new) ▷ #prompting (6 messages):

Stripe Payment Issues, User Stories Documentation, Zapier Workaround for User Tiers, Upcoming Office Hours


Stackblitz (Bolt.new) ▷ #discussions (223 messages🔥🔥):

Bolt Performance Issues, Supabase vs Firebase, Connecting to Supabase, Iframe Issues with Calendly, User Authentication Issues

Links mentioned:


Nomic.ai (GPT4All) ▷ #general (189 messages🔥🔥):

GPT4All Bug Reports, Quantization and Model Efficiency, Data Privacy Concerns, LaTeX Support in AI Models, NSFW Story Generation with LLMs

Links mentioned:


Notebook LM Discord ▷ #use-cases (13 messages🔥):

NotebookLM for JS Interviews, Google Workspace Standard Account, NBLM in BPO Environment, Leveraging NBLM for Language Learning, Podcast Announcement

Link mentioned: Chrome Web Store: Add new features to your browser and personalize your browsing experience.


Notebook LM Discord ▷ #general (104 messages🔥🔥):

NotebookLM functionality, Language settings, Audio customization, API release, AI models and capabilities

Links mentioned:


Modular (Mojo 🔥) ▷ #general (6 messages):

Mojo and MAX solutions, Broken Mojo Examples link, Community Mojo Examples, Modular examples page update

Link mentioned: Community Showcase: Community projects that use MAX and Mojo


Modular (Mojo 🔥) ▷ #mojo (49 messages🔥):

Complexity in Mojo vs Swift, Mojo for Programming Education, Challenges with Mojo's Type System, Community Feedback on Mojo 1.0, Hot Reloading System for Mojo


Modular (Mojo 🔥) ▷ #max (41 messages🔥):

MAX Serving Infrastructure, Ollama Performance Comparison, Memory Usage in LLMs, Weight Path Issues, DeepSeek R1 Model Performance

Links mentioned:


Torchtune ▷ #general (32 messages🔥):

GRPO on multiple nodes, SFT without message structure, Custom dataset class considerations, Hijacking SFTDataset transforms

Links mentioned:


Torchtune ▷ #dev (32 messages🔥):

Multinode Support in Torchtune, DPO Recipe Seed Issue, Normalization in DPO Loss, Gradient Accumulation Fix, DataLoader and Seed Consistency

Links mentioned:


Torchtune ▷ #papers (2 messages):

Data Augmentation in LLMs, R1-V Model Introduction

Links mentioned:


LLM Agents (Berkeley MOOC) ▷ #mooc-announcements (1 messages):

Lecture with Jason Weston, Self-Improvement Methods in LLMs, Jason Weston Background


LLM Agents (Berkeley MOOC) ▷ #mooc-questions (51 messages🔥):

Quiz Completion Confusion, MOOC Project Participation, Certification Queries, Mailing List Confirmation, Hackathon Results Update

Link mentioned: Advanced Large Language Model Agents MOOC: MOOC, Spring 2025


LLM Agents (Berkeley MOOC) ▷ #mooc-lecture-discussion (8 messages🔥):

Quiz availability, DeepSeek R1 vs PEFT, Email alerts for quizzes, Study session on Reasoning techniques, Course website navigation

Link mentioned: Advanced Large Language Model Agents MOOC: MOOC, Spring 2025


tinygrad (George Hotz) ▷ #general (33 messages🔥):

PR Handling, Video Decoding with NVDEC, WebGPU Autogen Progress, LLVM and Clang Usage in Linux Distros

Links mentioned:


tinygrad (George Hotz) ▷ #learn-tinygrad (3 messages):

HCQ Execution Paradigm, CPU P2P Transfer Mechanisms, Math Trait Refactor, Multigpu Execution Strategies

Link mentioned: Comparing tinygrad:master...davidjanoskyrepo:math_trait_refactor · tinygrad/tinygrad: You like pytorch? You like micrograd? You love tinygrad! ❤️ - Comparing tinygrad:master...davidjanoskyrepo:math_trait_refactor · tinygrad/tinygrad


Cohere ▷ #discussions (17 messages🔥):

Cohere trial key limits, Command-R+ model performance, Account auto logout issues


Cohere ▷ #api-discussions (9 messages🔥):

Embed API v2.0 errors, Command R and Japanese translations


Cohere ▷ #cohere-toolkit (4 messages):

Limitations of LLMs in Math, ASLLM - Application Specific Language Models


LlamaIndex ▷ #blog (6 messages):

LlamaReport, o3-mini support, SciAgents, PDF to PPT Generator, Contextual Retrieval

Link mentioned: GitHub - lesteroliver911/ai-pdf-ppt-generator-openai: A fun project where I use the power of AI to analyze a PDF. The AI extracts key information based on the user's instructions and selections (see the UI demo). The user then gets a second screen to edit the slides before downloading the final PPT. Simple, fast, and powered by AI to make creating presentations a breeze!


LlamaIndex ▷ #general (19 messages🔥):

Deepseek vs OpenAI, Auto-Retrieval from Vector Database, Testing Chunking Strategies, Token Cost with Structured Output, Managing Memory for Multiple Users

Links mentioned:


LlamaIndex ▷ #ai-discussion (1 messages):

Deepseek vs OpenAI, Audio Narration Technology

Link mentioned: no title found: no description found


DSPy ▷ #show-and-tell (1 messages):

DeepSeek Perspectives, Power Objects in AI, AI Boosters vs Skeptics, Open Source vs Proprietary Development, AI Doomsday Concerns

Link mentioned: DeepSeek as a Power Object: The wave of DeepSeek takes reveal more about our own hopes and concerns than they do about DeepSeek.


DSPy ▷ #papers (1 messages):

SAEs performance, LLM steering methods

Link mentioned: Tweet from KZ is in London (@kzSlider): Damn, triple-homicide in one day. SAEs really taking a beating recently


DSPy ▷ #general (13 messages🔥):

Typed Predictors in DSPy 2.6, Mixing Chain-of-Thought with R1 Models, Streaming Outputs in DSPy, Error with Importing in DSPy

Links mentioned:


LAION ▷ #general (10 messages🔥):

OpenEuroLLM, EU Commission AI Initiative, Research Project Challenges

Links mentioned:


LAION ▷ #research (4 messages):

CV Research Collaboration, R1-Llama and R1-Qwen Evaluation, DeepSeek Model Specifications

Link mentioned: Tweet from Jenia Jitsev 🏳️‍🌈 🇺🇦 🇮🇱 (@JJitsev): DeepSeek R1 Distilled Llama 70B & Qwen 32B models claim to solve olympiad level math & coding problems, matching o1-mini which claims same. Can they handle versions of AIW problems that reveal general...


Axolotl AI ▷ #general (3 messages):

Fine-tuning reasoning models, GRPO Colab notebook

Link mentioned: Google Colab: no description found


OpenInterpreter ▷ #general (3 messages):

o3-mini compatibility, Open Interpreter changes


MLOps @Chipro ▷ #events (2 messages):

Cursor AI as Development Tool, Honor of Kings Market Transactions

Link mentioned: Awesome AI Tool - Use Cursor Like a Professional · Zoom · Luma: Do you want to learn about how to use Cursor AI like a pro?🚀 Our guest speaker Arnold will share how he became a 10X CTO through mastering Cursor.We'll…


Mozilla AI ▷ #announcements (1 messages):

Lumigator Live Demo, Firefox AI Platform, Blueprints Update, Builders Demo Day Pitches




{% else %}

The full channel by channel breakdowns have been truncated for email.

If you want the full breakdown, please visit the web version of this email: [{{ email.subject }}]({{ email_url }})!

If you enjoyed AInews, please share with a friend! Thanks in advance!

{% endif %}