Model: "stable-diffusion-3.5"
not much happened today
smollm2 llama-3-2 stable-diffusion-3.5 claude-3.5-sonnet gemini openai anthropic google meta-ai-fair suno-ai perplexity-ai on-device-ai model-performance robotics multimodality ai-regulation model-releases natural-language-processing prompt-engineering agentic-ai ai-application model-optimization sam-altman akhaliq arav-srinivas labenz loubnabenallal1 alexalbert fchollet stasbekman svpino rohanpaul_ai hamelhusain
Sam Altman launched ChatGPT Search, calling it his favorite feature since ChatGPT's original launch and saying it has doubled his usage. Comparisons were drawn between ChatGPT Search and Perplexity, with improvements noted in Perplexity's web navigation. Google introduced a "Grounding" feature in the Gemini API and AI Studio, enabling Gemini models to access real-time web information. Despite Gemini's leaderboard performance, developer adoption lags behind OpenAI and Anthropic. SmolLM2, a new small but powerful on-device language model, outperforms Meta's Llama 3.2 1B. A Claude desktop app was released for Mac and Windows. Meta AI announced robotics advancements including Meta Sparsh, Meta Digit 360, and Meta Digit Plexus. Stable Diffusion 3.5 Medium, a 2B-parameter model with a permissive license, was released. Commentary on AGI development suggests early systems will initially be inferior to humans but improve rapidly. Anthropic advocates for early, targeted AI regulation. Discussions on ML specialization predict that training will concentrate among a few companies while inference becomes commoditized. New AI tools include Suno AI Personas for music creation, PromptQL for natural-language querying over data, and Agent S for desktop task automation. Humor was shared about Python environment upgrades.
s{imple|table|calable} Consistency Models
llama-3-70b llama-3-405b llama-3-1 stable-diffusion-3.5 gpt-4 stability-ai tesla cerebras cohere langchain model-distillation diffusion-models continuous-time-consistency-models image-generation ai-hardware inference-speed multilingual-models yang-song
Model distillation significantly accelerates diffusion models, enabling near real-time image generation with only 1-4 sampling steps, as seen in BlinkShot and Flux Schnell. Research led by Yang Song introduced simplified continuous-time consistency models (sCMs), achieving under 10% FID difference in just 2 steps and scaling up to 1.5B parameters for higher quality. On AI hardware, Tesla is deploying a 50k H100 cluster potentially capable of completing GPT-4 training in under three weeks, while Cerebras Systems set a new inference speed record on Llama 3.1 70B with their wafer-scale AI chips. Stability AI released Stable Diffusion 3.5 and its Turbo variant, and Cohere launched new multilingual models supporting 23 languages with state-of-the-art performance. LangChain also announced ecosystem updates.
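The 1-4 step generation described above can be sketched as a generic few-step consistency sampling loop: jump from pure noise straight to a data estimate, then optionally re-noise and jump again. The noise schedule and the dummy model below are illustrative stand-ins, not the sCM implementation.

```python
import numpy as np

def consistency_sample(f, shape, sigmas, rng=None):
    """Schematic few-step consistency sampling.

    f(x, sigma) is a consistency function mapping a noisy sample at
    noise level sigma directly to an estimate of the clean data.
    `sigmas` is a decreasing list of noise levels; two entries give
    the 2-step sampling regime discussed above.
    """
    rng = rng or np.random.default_rng(0)
    # Start from pure noise at the largest noise level.
    x = rng.standard_normal(shape) * sigmas[0]
    x = f(x, sigmas[0])  # one jump to a clean-data estimate
    for sigma in sigmas[1:]:
        # Re-noise the estimate to an intermediate level, then jump again.
        x = x + sigma * rng.standard_normal(shape)
        x = f(x, sigma)
    return x

# Dummy "model" for illustration only; a real sCM would be a trained
# network (up to 1.5B parameters in the work cited above).
dummy_f = lambda x, sigma: x / (1.0 + sigma)

sample = consistency_sample(dummy_f, (4, 4), sigmas=[80.0, 0.8])
```

Each extra entry in `sigmas` trades a little more compute for quality, which is why 1-4 steps is the practical range for near real-time generation.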
not much happened today
claude-3.5-sonnet claude-3.5-haiku o1-preview mochi-1 stable-diffusion-3.5 embed-3 kerashub differential-transformer anthropic openai cohere microsoft computer-use coding-performance video-generation fine-tuning multimodality transformers attention-mechanisms model-optimization alexalbert fchollet rasbt
Anthropic released upgraded Claude 3.5 Sonnet and Claude 3.5 Haiku models featuring a new computer use capability that lets the model interact with computer interfaces via screenshots and actions such as mouse movement and typing. Claude 3.5 Sonnet achieved state-of-the-art coding performance on SWE-bench Verified with a 49% score, surpassing OpenAI's o1-preview. Anthropic is focusing on teaching general computer skills rather than task-specific tools, with rapid improvements expected. Other releases include Mochi 1, an open-source video generation model; Stable Diffusion 3.5, with Large and Medium variants; and Embed 3 by Cohere, a multimodal embedding model for text and image search. KerasHub was launched by François Chollet, unifying KerasNLP and KerasCV with 37 pretrained models. Microsoft introduced the Differential Transformer, which reduces attention noise via differential attention maps, and Sebastian Raschka (rasbt) shared research on transformer attention layers.
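The computer use flow works through Anthropic's Messages API beta: the request declares a special computer tool, and the model replies with screenshot/mouse/keyboard actions that a client-side loop must execute. A minimal payload sketch is below; the `computer_20241022` tool type and field names are taken from the beta as announced, but check current Anthropic docs before relying on exact names.

```python
def build_computer_use_request(prompt, width=1024, height=768):
    """Assemble a Messages-API payload for a computer-use request (sketch)."""
    return {
        "model": "claude-3-5-sonnet-20241022",
        "max_tokens": 1024,
        # The special tool lets the model emit screenshot, mouse, and
        # keyboard actions instead of calling task-specific tools.
        "tools": [{
            "type": "computer_20241022",
            "name": "computer",
            "display_width_px": width,
            "display_height_px": height,
        }],
        "messages": [{"role": "user", "content": prompt}],
    }

payload = build_computer_use_request("Open the browser and search for SWE-bench.")
```

In practice the client sends this payload (e.g. via the `anthropic` SDK with the computer-use beta header enabled), executes each action the model returns, captures a fresh screenshot, and feeds it back as a tool result until the task completes.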