Topic: "music-generation"
Not much happened today
mistral-small-3.2 magenta-realtime afm-4.5b llama-3 openthinker3-7b deepseek-r1-distill-qwen-7b storm qwen2-vl gpt-4o dino-v2 sakana-ai mistral-ai google arcee-ai deepseek-ai openai amazon gdm reinforcement-learning chain-of-thought fine-tuning function-calling quantization music-generation foundation-models reasoning text-video model-compression image-classification evaluation-metrics sama
Sakana AI released Reinforcement-Learned Teachers (RLTs), a technique in which smaller 7B-parameter models trained via reinforcement learning teach reasoning through step-by-step explanations, accelerating Chain-of-Thought learning. Mistral AI released Mistral Small 3.2, an update that improves instruction following and function calling and adds experimental FP8 quantization. Google released Magenta RealTime, an 800M-parameter open-weights model for real-time music generation. Arcee AI launched AFM-4.5B, a sub-10B-parameter foundation model extended from Llama 3. OpenThinker3-7B was introduced as a new state-of-the-art 7B reasoning model, a 33% improvement over DeepSeek-R1-Distill-Qwen-7B. The STORM text-video model compresses video input 8x using Mamba layers and outperforms GPT-4o on MVBench with 70.6%. Discussions of the PPO vs. GRPO reinforcement learning algorithms and of DINOv2's performance on ImageNet-1k were also highlighted. Overall it was "a very quiet day" in AI news, with valuable workshops from OpenAI, Amazon, and GDM.
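The PPO vs. GRPO comparison largely comes down to how the advantage is estimated: PPO trains a separate value network (critic) and uses GAE-style estimates, while GRPO drops the critic and normalizes rewards across a group of completions sampled for the same prompt. A minimal sketch of the GRPO-style advantage computation, with illustrative names only (not taken from any particular library):

```python
import numpy as np

def grpo_advantages(rewards: np.ndarray, eps: float = 1e-8) -> np.ndarray:
    """Group-relative advantages: normalize rewards within one group of
    completions sampled for the same prompt, so no learned critic is needed."""
    return (rewards - rewards.mean()) / (rewards.std() + eps)

# Example: four completions for one prompt, scored by a reward function.
rewards = np.array([0.2, 0.9, 0.4, 0.7])
print(grpo_advantages(rewards))
# PPO would instead estimate advantages with a trained value network V(s),
# e.g. GAE over r_t + gamma * V(s_{t+1}) - V(s_t).
```

Both algorithms then feed the advantage into the same clipped policy-gradient objective; GRPO simply avoids training and storing the critic.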
The Quiet Rise of Claude Code vs Codex
mistral-small-3.2 qwen3-0.6b llama-3-1b gemini-2.5-flash-lite gemini-app magenta-real-time apple-3b-on-device mistral-ai hugging-face google-deepmind apple artificial-analysis kuaishou instruction-following function-calling model-implementation memory-efficiency 2-bit-quantization music-generation video-models benchmarking api reach_vb guillaumelample qtnx_ shxf0072 rasbt demishassabis artificialanlys osanseviero
Claude Code is gaining mass adoption, inspiring derivative projects such as OpenCode and ccusage, with ongoing discussion across AI communities. Mistral AI released Mistral Small 3.2, a 24B-parameter model update that improves instruction following and function calling, available on Hugging Face and supported by vLLM. Sebastian Raschka implemented Qwen3 0.6B from scratch, noting its deeper architecture and better memory efficiency compared to Llama 3 1B. Google DeepMind showcased Gemini 2.5 Flash-Lite's UI code generation from visual context and added video upload support to the Gemini App. Apple's new 3B-parameter on-device foundation model was benchmarked, showing slower speed but efficient memory use via 2-bit quantization, making it suitable for background tasks. Google DeepMind also released Magenta RealTime, an 800M-parameter music generation model licensed under Apache 2.0, marking Google's 1000th model on Hugging Face. Kuaishou launched KLING 2.1, a new video model accessible via API.
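Since the release is distributed via Hugging Face with vLLM support, a minimal offline-inference sketch could look like the following; the repo id, tokenizer mode, and sampling settings are assumptions for illustration, not taken from the announcement:

```python
from vllm import LLM, SamplingParams

# Assumed Hugging Face repo id for the Mistral Small 3.2 update; check the
# mistralai organization on Hugging Face for the exact name.
MODEL_ID = "mistralai/Mistral-Small-3.2-24B-Instruct-2506"

# A 24B-parameter model needs a large GPU, or tensor parallelism across several.
llm = LLM(model=MODEL_ID, tokenizer_mode="mistral")
params = SamplingParams(temperature=0.2, max_tokens=256)

outputs = llm.generate(
    ["Summarize the changes in Mistral Small 3.2 in one sentence."], params
)
print(outputs[0].outputs[0].text)
```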
Cursor @ $9b, OpenAI Buys Windsurf @ $3b
llama-nemotron-ultra llama-nemotron-super llama-nemotron-nano qwen3-235b-a22b prover-v2 phi-4-reasoning ernie-4.5-turbo ernie-x1-turbo suno-v4.5 gen-4-references o1-mini openai cursor nvidia alibaba deepseek microsoft baidu suno runway keras reasoning inference-efficiency open-license moe-models math-reasoning theorem-proving model-performance music-generation image-generation recommender-systems tpu-optimization _akhaliq adcock_brett lmarena_ai fchollet
OpenAI is reportedly close to closing a deal to acquire Windsurf, coinciding with Cursor's $900M funding round at a $9B valuation. Nvidia launched the Llama-Nemotron series, featuring models from 8B to 253B parameters praised for reasoning and inference efficiency. Alibaba released the Qwen3 family, with MoE and dense models up to 235B parameters that rank highly on coding and math benchmarks. DeepSeek introduced Prover-V2, an open-source model for math reasoning and theorem proving with an 88.9% pass rate on MiniF2F-test. Microsoft released reasoning-focused Phi-4 models that outperform OpenAI's o1-mini. Baidu debuted turbo versions of ERNIE 4.5 and X1 for faster, cheaper inference. Suno v4.5 added advanced AI music generation features, while Runway's Gen-4 References feature enables placing characters into scenes with high consistency. KerasRS, a new recommender-system library optimized for TPUs, was released by François Chollet.
12/26/2023: not much happened today
llava exllama2 meta-ai-fair google-deepmind gpu-offloading vram-utilization model-conversion moe-models multimodality model-performance hardware-configuration model-saving chatml installation-issues music-generation
LM Studio users extensively discussed its performance, installation issues on macOS, and upcoming features such as Exllama2 support and multimodality via the LLaVA model. Conversations covered GPU offloading, VRAM utilization, expert selection in MoE models, and model conversion compatibility. The community also pushed back on inefficient help requests, pointing to the blog post 'Don't Ask to Ask, Just Ask'. Technical challenges with the ChromaDB plugin, server vs. desktop hardware performance, and saving model states with Autogen were highlighted. Discussions also included comparisons with other chatbots and mentions of AudioCraft from Meta AI (FAIR) and MusicLM from Google DeepMind for music generation.
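GPU offloading in llama.cpp-based frontends like LM Studio comes down to choosing how many transformer layers are kept in VRAM versus system RAM. A minimal llama-cpp-python sketch of the same knob, with a placeholder model path and an arbitrary layer count (an illustration of the concept, not LM Studio's internal API):

```python
from llama_cpp import Llama

# Keep 20 transformer layers on the GPU and the rest in system RAM;
# n_gpu_layers=-1 offloads everything if VRAM allows. Path is a placeholder.
llm = Llama(
    model_path="models/example-7b.Q4_K_M.gguf",
    n_gpu_layers=20,   # raise or lower to trade VRAM use against speed
    n_ctx=4096,
)

out = llm("Q: What does GPU offloading do? A:", max_tokens=64)
print(out["choices"][0]["text"])
```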