All tags
Person: "yann-lecun"
not much happened today
gemini-2.0-flash-thinking command-a qwq-32b gemma-3-27b gemma-3 shieldgemma-2 llama-3-70b deepseek-r1 o1-mini deepseek-v3 google-deepmind cohere meta-ai-fair alibaba hugging-face model-updates model-performance benchmarking reinforcement-learning transformers normalization-layers image-generation vision memory-efficiency context-windows fine-tuning yann-lecun
Google DeepMind announced updates to Gemini 2.0, including an upgraded Flash Thinking model with stronger reasoning and native image generation capabilities. Cohere launched Command A, a 111B parameter dense model with a 256K context window and competitive pricing, available on Hugging Face. Meta AI proposed Dynamic Tanh (DyT) as a replacement for normalization layers in Transformers, supported by Yann LeCun. Alibaba released QwQ-32B, a 32.5B parameter model excelling in math and coding, fine-tuned with reinforcement learning and freely available under the Apache 2.0 license. Google DeepMind also released Gemma 3 models ranging from 1B to 27B parameters with a 128K token context window and support for over 140 languages, plus ShieldGemma 2, an image safety checker. Benchmarking shows Gemma 3 27B has strong vision and memory efficiency but is outperformed by larger models like Llama 3.3 70B and DeepSeek V3 671B. The Hugging Face LLM leaderboard history was shared by @_lewtun.
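The Dynamic Tanh idea replaces a Transformer's normalization layer with a bounded elementwise squashing plus a learned affine map, avoiding the computation of activation statistics. A minimal NumPy sketch of the forward pass (parameter names alpha/gamma/beta follow the paper's description; this is an illustrative reconstruction, not FAIR's implementation):

```python
import numpy as np

def dyt(x, alpha, gamma, beta):
    """Dynamic Tanh (DyT) forward pass: y = gamma * tanh(alpha * x) + beta.

    A stand-in for LayerNorm that computes no mean/variance statistics:
    alpha is a learnable scalar, gamma/beta are per-channel scale and shift.
    """
    return gamma * np.tanh(alpha * x) + beta

# Toy usage on a (batch, dim) activation tensor.
x = np.random.randn(4, 8)
y = dyt(x, alpha=0.5, gamma=np.ones(8), beta=np.zeros(8))
assert y.shape == x.shape
```

With gamma fixed at 1 and beta at 0, outputs are strictly bounded in (-1, 1) by the tanh, which is the saturating behavior the method relies on in place of normalization.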
not much happened today
deepseek-r1 deepseek-v3 coder-v2 prover deepseek hugging-face dell openai instruction-tuning performance-benchmarks model-deployment training-costs hardware-scalability ai-safety risk-mitigation ethical-ai open-source gpu-utilization yann-lecun yoshua-bengio francois-chollet giffman
DeepSeek-R1 and DeepSeek-V3 models have made significant advancements, trained on an instruction-tuning dataset of 1.5M samples, including 600,000 reasoning and 200,000 non-reasoning SFT examples. The models demonstrate strong performance benchmarks and are deployed on-premise via collaborations with Dell and Hugging Face. Training costs are estimated around $5.5M to $6M, with efficient hardware utilization on 8xH100 servers. The International AI Safety Report highlights risks such as malicious use, malfunctions, and systemic risks including AI-driven cyberattacks. Industry leaders like Yann LeCun and Yoshua Bengio provide insights on market reactions, AI safety, and ethical considerations, with emphasis on AI's role in creativity and economic incentives.
DeepSeek-R1 claims to beat o1-preview AND will be open sourced
deepseek-r1-lite-preview o1-preview hopper blackwell alphaqubit deepseek nvidia google-deepmind reasoning benchmarking quantum-error-correction quantum-computing model-performance model-release yann-lecun
DeepSeek has released DeepSeek-R1-Lite-Preview, an open-source reasoning model achieving o1-preview-level performance on math benchmarks with transparent thought processes, showing promise in real-time problem-solving. NVIDIA reported a record $35.1 billion revenue in Q3 with 112% year-on-year data center growth, driven by Hopper and Blackwell architectures, the latter offering 2.2x performance improvement. Google DeepMind introduced AlphaQubit, a quantum computing system improving error correction and outperforming leading decoders, though challenges remain in scaling and speed. The AI community continues to focus on reasoning models, benchmarking, and quantum error correction advancements.
not much happened today
llama mistral openai decagon sierra togethercompute vertical-saas funding protein-structure-prediction lora self-supervised-learning model-optimization neural-architecture-search model-evaluation ethics transformers multi-agent-systems long-context mira-murati demis-hassabis clement-delangue john-o-whitaker yann-lecun francois-chollet ajeya-cotra rohan-paul adcock-brett
Vertical SaaS agents are gaining rapid consensus as the future of AI applications, highlighted by Decagon's $100m funding and Sierra's raise at a $4b valuation. OpenAI alumni are actively raising venture capital and forming new startups, intensifying competition in the AI market. Demis Hassabis celebrated the Nobel Prize recognition for AlphaFold2, a breakthrough in protein structure prediction. Advances in AI models include techniques like LoRA projectors and annealing on high-quality data, while discussions emphasize the need for high-bandwidth sensory inputs beyond language for common sense learning. New methods like LoLCATs aim to optimize transformer models such as Llama and Mistral for efficiency. Ethical concerns about AI agents performing harmful tasks remain under investigation. The AI community continues to explore model evaluation challenges and optimization frameworks like LPZero for neural architecture search.
Not much (in AI) happened this weekend
llama-3.1-8b llama-3.2 chatgpt movie-gen openai meta-ai-fair google-deepmind microsoft x-ai spacex harvard nvidia long-context feature-prediction-loss ai-agents privacy text-to-video text-to-image humanoid-robots gpu-deployment media-foundation-models ai-research-labs sam-altman yann-lecun rasbt bindureddy andrej-karpathy soumithchintala svpino adcock_brett rohanpaul_ai
OpenAI introduced an "edit this area" feature for image generation, praised by Sam Altman. Yann LeCun highlighted a NYU paper improving pixel generation with feature prediction loss using pre-trained visual encoders like DINOv2. Long-context LLMs such as llama-3.1-8b and llama-3.2 variants now support up to 131k tokens, offering alternatives to RAG systems. Bindu Reddy announced AI agents capable of building and deploying code from English instructions, signaling AI's replacement of SQL and potential impact on Python. SpaceX's successful Starship rocket catch was celebrated by Andrej Karpathy and others, with Soumith Chintala praising SpaceX's efficient, low-bureaucracy research approach. Privacy concerns arose from Harvard students' AI glasses, I-XRAY, which can reveal personal information. Meta AI FAIR's Movie Gen model advances media foundation models with high-quality text-to-image and video generation, including synced audio. Humanoid robots like Ameca and Azi now engage in expressive conversations using ChatGPT. xAI rapidly deployed 100K Nvidia H100 GPUs in 19 days, with Nvidia CEO Jensen Huang commending Elon Musk. Leading AI research labs compared include Meta-FAIR, Google DeepMind, and Microsoft Research. Skepticism about LLM intelligence was voiced by Santiago Valdarrama (@svpino), emphasizing limitations in novel problem-solving despite strong memorization.
AIPhone 16: the Visual Intelligence Phone
reflection-70b llama-3-70b qwen-2-72b llama-3-1-405b claude gpt-4 gemini apple openai weights-biases vision video-understanding benchmarking planning model-evaluation privacy ai-integration instruction-following yann-lecun
Apple announced the new iPhone 16 lineup featuring Visual Intelligence, a new AI capability integrated with Camera Control, Apple Maps, and Siri, emphasizing privacy and defaulting to Apple's own services over third-party AI like OpenAI's. Apple Photos now includes advanced video understanding with timestamp recognition. Meanwhile, Reflection-70B claims to be a top open-source model, but benchmarks show it performs close to Llama 3 70B and slightly worse than Qwen 2 72B. Yann LeCun highlighted ongoing challenges with LLM planning abilities, noting models like Llama-3.1-405b and Claude show some skill, while GPT-4 and Gemini lag behind. Weights & Biases is sponsoring an event to advance LLM evaluation techniques with prizes and API access.
Test-Time Training, MobileLLM, Lilian Weng on Hallucination (Plus: Turbopuffer)
llama-2-7b codegeex4-all-9b mamba facebook-research meta-ai-fair tsinghua-university hallucination-detection anti-hallucination-methods on-device-ai model-architecture rnn long-context-modeling model-scaling expressive-hidden-states code-generation lilian-weng yann-lecun
Lilian Weng released a comprehensive literature review on hallucination detection and anti-hallucination methods including techniques like FactualityPrompt, SelfCheckGPT, and WebGPT. Facebook AI Research (FAIR) published MobileLLM, a sub-billion parameter on-device language model architecture achieving performance comparable to llama-2-7b with innovations like thin and deep models and shared weights. A new RNN-based LLM architecture with expressive hidden states was introduced, replacing attention mechanisms and scaling better than Mamba and Transformer models for long-context modeling. Additionally, Tsinghua University open sourced CodeGeeX4-ALL-9B, a multilingual code generation model excelling in code assistance.
Ways to use Anthropic's Tool Use GA
claude-3-opus haiku opus convnext anthropic amazon google tool-use function-calling agentic-ai streaming vision parallelization delegation debate specialization open-science superintelligence convolutional-networks self-attention ai-research yann-lecun alex-albert sainingxie
Anthropic launched general availability of tool use/function calling with support for streaming, forced use, and vision, alongside Amazon and Google. Alex Albert shared five architectures for agentic tool use: delegation, parallelization, debate, specialization, and tool suite experts. Anthropic also introduced a self-guided course on tool use. Yann LeCun emphasized ethical open science funding, gradual emergence of superintelligence with safety guardrails, and convolutional networks for image/video processing as competitive with vision transformers. He also noted growth in AI researchers across industry, academia, and government.
1 TRILLION token context, real time, on device?
gemini-1.5-pro gemini-1.5 cartesia mistral-ai scale-ai state-space-models voice-models multimodality model-performance on-device-ai long-context evaluation-leaderboards learning-rate-optimization scientific-publishing research-vs-engineering yann-lecun elon-musk
Cartesia, a startup specializing in state space models (SSMs), launched a low latency voice model outperforming transformer-based models with 20% lower perplexity, 2x lower word error, and 1 point higher NISQA quality. This breakthrough highlights the potential for models that can continuously process and reason over massive streams of multimodal data (text, audio, video) with a trillion token context window on-device. The news also covers recent AI developments including Mistral's Codestral weights release, the Schedule Free optimizers paper release, and Scale AI's new elo-style eval leaderboards. Additionally, a debate between Yann LeCun and Elon Musk on the importance of publishing AI research versus engineering achievements was noted. The Gemini 1.5 Pro/Advanced models were mentioned for their strong performance.
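Word error rate, one of the metrics cited above, is word-level edit distance divided by the reference length. A small illustrative implementation (not Cartesia's evaluation code) shows how the "2x lower word error" comparison is computed:

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """WER = (substitutions + insertions + deletions) / reference word count,
    computed via Levenshtein distance over word tokens."""
    ref, hyp = reference.split(), hypothesis.split()
    # DP table: d[i][j] = edit distance between ref[:i] and hyp[:j].
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[len(ref)][len(hyp)] / len(ref)

print(word_error_rate("the cat sat", "the cat sat down"))  # one insertion -> 0.333...
```

Note that WER can exceed 1.0 when the hypothesis contains many insertions, which is why it is reported alongside perceptual metrics like NISQA rather than alone.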
Somebody give Andrej some H100s already
gpt-2 openai fineweb meta-ai-fair nvidia tesla cuda fine-tuning training-time gpu-acceleration convolutional-neural-networks real-time-processing ai-safety ai-regulation andrej-karpathy yann-lecun elon-musk francois-chollet svpino mervenoyann
OpenAI's GPT-2 sparked controversy five years ago for being "too dangerous to release." Now, with FineWeb and llm.c, a tiny GPT-2 model can be trained in 90 minutes for $20 using 8xA100 GPUs, with the full 1.6B model estimated to take 1 week and $2.5k. The project is notable for its heavy use of CUDA (75.8%) aiming to simplify the training stack. Meanwhile, a Twitter debate between Yann LeCun and Elon Musk highlighted the importance of convolutional neural networks (CNNs) in real-time image processing for autonomous driving, with LeCun emphasizing scientific research's role in technological progress. LeCun also criticized AI doomsday scenarios, arguing for cautious optimism about AI safety and regulation.
A quiet weekend
llama-3 dolphin-2.9 pixart-sigma llama-3-70b microsoft coca-cola uber lmsys nous-research mistral-ai ar-interfaces transformers algorithmic-tasks turing-test graph-algorithms embeddings generative-ai model-optimization llm-inference quantization model-deployment yann-lecun
Yann LeCun predicts a shift to AR interfaces with AI assistants in 10-15 years, moving away from smartphones. The Dolphin-2.9 model based on Llama-3 was released, improving quality issues. PixArt Sigma, a 0.6B parameter model, achieves Stable Diffusion 3.0 level performance with complete prompt adherence and local usability. Research shows transformers can use meaningless filler tokens for algorithmic tasks with dense supervision. AI-generated restaurant reviews can pass the Turing test, fooling humans and AI detectors. Uber uses graph algorithms and learned embeddings for ETA prediction. Coca-Cola and Microsoft announced a 5-year AI partnership to accelerate cloud and generative AI initiatives. The Llama-3 70B model can run on a single 4GB GPU using AirLLM optimization without quantization but is slow. Mistral.rs is introduced as a fast LLM inference platform with quantization and OpenAI API compatibility. Only 5% of LLMs make it from prototype to production due to challenges, especially in enterprise. EXL2 and GGUF quantization methods for Llama models show similar perplexity vs model size, with Llama-3 and Llama-2 degrading more under quantization compared to full precision.
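The EXL2/GGUF comparison above comes down to how much rounding error a given bit width introduces into the weights. A toy round-to-nearest weight quantizer (illustrative only, not the actual EXL2 or GGUF formats, which use grouped scales and mixed precision) makes the degradation-vs-bit-width effect visible:

```python
import numpy as np

def fake_quantize(w: np.ndarray, bits: int) -> np.ndarray:
    """Symmetric round-to-nearest quantization with a per-tensor scale,
    then dequantization back to float (simulating the precision loss)."""
    qmax = 2 ** (bits - 1) - 1           # e.g. 7 for 4-bit signed
    scale = np.abs(w).max() / qmax
    q = np.clip(np.round(w / scale), -qmax, qmax)
    return q * scale

rng = np.random.default_rng(0)
w = rng.normal(size=10_000).astype(np.float32)
for bits in (8, 4, 2):
    err = np.abs(fake_quantize(w, bits) - w).mean()
    print(f"{bits}-bit mean abs error: {err:.4f}")  # error grows as bits shrink
```

Real formats reduce this error with per-group scales and by keeping sensitive tensors at higher precision, which is why similar perplexity can be achieved at similar average bits per weight across schemes.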
Lilian Weng on Video Diffusion
wizardlm-2 llama-3 reka-core devin opus sora openai adobe reka-ai diffusion-models video-generation training-free-adaptation multimodality intuition creativity analogy-recognition self-improving-ai model-recognition agi-timelines model-performance startup-competition lilian-weng sam-altman geoffrey-hinton yann-lecun
OpenAI expands with a launch in Japan, introduces a Batch API, and partners with Adobe to bring the Sora video model to Premiere Pro. Reka AI releases the Reka Core multimodal language model. WizardLM-2 is released showing impressive performance, and Llama 3 news is anticipated soon. Geoffrey Hinton highlights AI models exhibiting intuition, creativity, and analogy recognition beyond humans. The Devin AI model notably contributes to its own codebase. Opus demonstrates the ability to recognize its own generated outputs. Sam Altman warns startups about being steamrolled by OpenAI if they don't adapt quickly. Yann LeCun discusses AGI timelines, emphasizing it is inevitable but not imminent or solely from LLMs. Lilian Weng's blog on diffusion models for video generation highlights training-free adaptation as a breakthrough technique.
World_sim.exe
gpt-4 gpt-4o grok-1 llama-cpp claude-3-opus claude-3 gpt-5 nvidia nous-research stability-ai hugging-face langchain anthropic openai multimodality foundation-models hardware-optimization model-quantization float4 float6 retrieval-augmented-generation text-to-video prompt-engineering long-form-rag gpu-optimization philosophy-of-ai agi-predictions jensen-huang yann-lecun sam-altman
NVIDIA announced Project GR00T, a foundation model for humanoid robot learning using multimodal instructions, built on their tech stack including Isaac Lab, OSMO, and Jetson Thor. They revealed the DGX Grace-Blackwell GB200 with over 1 exaflop of compute, capable of training a 1.8T-parameter GPT-4-scale model in 90 days on 2000 Blackwells. Jensen Huang confirmed GPT-4 has 1.8 trillion parameters. The new GB200 GPU supports float4/6 precision with ~3 bits per parameter and achieves 40,000 TFLOPs on fp4 with 2x sparsity.
Open source highlights include the release of Grok-1, a 314B parameter model, and Stability AI's SV3D, an open-source text-to-video generation solution. Nous Research collaborated on implementing Steering Vectors in llama.cpp.
In Retrieval Augmented Generation (RAG), a new 5.5-hour tutorial builds a pipeline using open-source HF models, and LangChain released a video on query routing and announced integration with NVIDIA NIM for GPU-optimized LLM inference.
Prominent opinions include Yann LeCun distinguishing language from other cognitive abilities, Sam Altman predicting AGI arrival in 6 years with a leap from GPT-4 to GPT-5 comparable to GPT-3 to GPT-4, and discussions on the philosophical status of LLMs like Claude. There is also advice against training models from scratch for most companies.
MM1: Apple's first Large Multimodal Model
mm1 gemini-1 command-r claude-3-opus claude-3-sonnet claude-3-haiku claude-3 apple cohere anthropic hugging-face langchain multimodality vqa fine-tuning retrieval-augmented-generation open-source robotics model-training react reranking financial-agents yann-lecun francois-chollet
Apple announced the MM1 multimodal LLM family with up to 30B parameters, claiming performance comparable to Gemini-1 and beating larger older models on VQA benchmarks. The paper targets researchers and hints at applications in embodied agents and business/education. Yann LeCun emphasized that human-level AI requires understanding the physical world, memory, reasoning, and hierarchical planning, while François Chollet cautioned that NLP is far from solved despite LLM advances. Cohere released Command-R, a model for Retrieval Augmented Generation, and Anthropic highlighted the Claude 3 family (Opus, Sonnet, Haiku) for various application needs. Open-source hardware DexCap enables dexterous robot manipulation data collection affordably. Tools like CopilotKit simplify AI integration into React apps, and migration to Keras 3 with JAX backend offers faster training. New projects improve reranking for retrieval and add financial agents to LangChain. The content includes insights on AI progress, new models, open-source tools, and frameworks.
DeepMind SIMA: one AI, 9 games, 600 tasks, vision+language ONLY
llama-3 claude-3-opus claude-3 gpt-3.5-turbo deepmind cognition-labs deepgram modal-labs meta-ai-fair anthropic multimodality transformer software-engineering ai-agents ai-infrastructure training text-to-speech speech-to-text real-time-processing model-architecture benchmarking andrej-karpathy arav-srinivas francois-chollet yann-lecun soumith-chintala john-carmack
DeepMind SIMA is a generalist AI agent for 3D virtual environments evaluated on 600 tasks across 9 games using only screengrabs and natural language instructions, achieving 34% success compared to humans' 60%. The model uses a multimodal Transformer architecture. Andrej Karpathy outlines AI autonomy progression in software engineering, while Aravind Srinivas praises Cognition Labs' AI agent demo. François Chollet expresses skepticism about automating software engineering fully. Yann LeCun suggests moving away from generative models and reinforcement learning towards human-level AI. Meta's Llama-3 training infrastructure with 24k H100 Cluster Pods is shared by Soumith Chintala and Yann LeCun. Deepgram's Aura offers low-latency speech APIs, and Modal Labs' Devin AI demonstrates document navigation and interaction with ComfyUI. Memes and humor circulate in the AI community.
Fixing Gemma
gemma claude-3-opus claude-3 mistral-large gpt-4 google unsloth anthropic mistral-ai finetuning numerical-precision benchmarking structured-data-extraction adaptive-equalizer information-theory hallucination-detection model-stability daniel-han yann-lecun francois-chollet arav-srinivas _aidan_clark_
Google's Gemma model was found unstable for finetuning until Daniel Han from Unsloth AI fixed 8 bugs, improving its implementation. Yann LeCun explained technical details of a pseudo-random bit sequence for adaptive equalizers, while François Chollet discussed the low information bandwidth of the human visual system. Aravind Srinivas reported that Claude 3 Opus showed no hallucinations in extensive testing, outperforming GPT-4 and Mistral-Large in benchmarks. Reflections from Yann LeCun highlight ongoing AI progress toward human-level intelligence. The community is shifting pipelines to work better with Claude models, and emotional experiences in ML development were shared by Aidan Clark.
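Pseudo-random bit sequences like the one LeCun described are classically generated with a linear-feedback shift register, whose XOR feedback yields a maximal-length sequence that repeats only after 2^n - 1 steps. A small sketch of a 4-bit Fibonacci LFSR (a textbook example, not the specific equalizer training sequence he discussed):

```python
def lfsr_bits(seed: int, n: int) -> list[int]:
    """4-bit maximal-length Fibonacci LFSR (polynomial x^4 + x^3 + 1).
    Feedback is the XOR of the two low taps; period is 2^4 - 1 = 15."""
    state = seed & 0xF
    out = []
    for _ in range(n):
        bit = (state ^ (state >> 1)) & 1   # XOR taps at positions 0 and 1
        out.append(state & 1)              # emit the low bit before shifting
        state = (state >> 1) | (bit << 3)  # shift down, feed back into the top bit
    return out

bits = lfsr_bits(seed=0b1000, n=30)
print(bits[:15] == bits[15:])  # True: the sequence repeats with period 15
```

Such sequences are useful for equalizer training precisely because they look noise-like over one period yet are exactly reproducible at the receiver.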
... and welcome AI Twitter!
mistral-large google-gemini google openai apple stripe ai-ethics multilinguality on-device-ai convolutional-neural-networks synthetic-data financial-transaction-systems corporate-culture humor margaret-mitchell john-carmack guillaume-lample sundar-pichai delip-rao santiago-l-valdarrama alex-wang yann-lecun pieter-levels francois-chollet dheliat
The AI Twitter discourse from 2/27-28/2024 covers a broad spectrum including ethical considerations highlighted by Margaret Mitchell around Google Gemini's launch, and John Carmack's insights on evolving coding skills in the AI era. Guillaume Lample announced the release of the Mistral Large multilingual model. Discussions also touched on potential leadership changes at Google involving Sundar Pichai, and OpenAI's possible entry into the synthetic data market as noted by Delip Rao. Technological advancements include Yann LeCun's commentary on running LLMs on mobile devices and Alex Wang's praise for the Apple Vision Pro. Financial platform issues were raised by Pieter Levels regarding Stripe's payment policies. The cultural dynamics within big tech were discussed by François Chollet and Dhéliat. The lighter side of AI was represented by memes and humor from Pieter Levels and AISafetyMemes. This summary reflects the fast-evolving AI landscape blending technical innovation, corporate strategy, ethics, and community culture.
12/10/2023: not much happened today
mixtral-8x7b-32kseqlen mistral-7b stablelm-zephyr-3b openhermes-2.5-neural-chat-v3-3-slerp gpt-3.5 gpt-4 nous-research openai mistral-ai hugging-face ollama lm-studio fine-tuning mixture-of-experts model-benchmarking inference-optimization model-evaluation open-source decentralized-ai gpu-optimization community-engagement andrej-karpathy yann-lecun richard-blythman gabriel-syme pradeep1148 cyborg_1552
Nous Research AI Discord community discussed attending NeurIPS and organizing future AI events in Australia. Highlights include interest in open-source and decentralized AI projects, with Richard Blythman seeking co-founders. Users shared projects like Photo GPT AI and introduced StableLM Zephyr 3B. The Mixtral model, based on Mistral, sparked debate on performance and GPU requirements, with comparisons to GPT-3.5 and potential competitiveness with GPT-4 after fine-tuning. Tools like Tensorboard, Wandb, and Llamahub were noted for fine-tuning and evaluation. Discussions covered Mixture of Experts (MoE) architectures, fine-tuning with limited data, and inference optimization strategies for ChatGPT. Memes and community interactions referenced AI figures like Andrej Karpathy and Yann LeCun. The community also shared resources such as GitHub links and YouTube videos related to these models and tools.