Topic: "speech-to-text"
not much happened today
gemma-3n hunyuan-a13b flux-1-kontext-dev mercury fineweb2 qwen-vlo o3-mini o4-mini google-deepmind tencent black-forest-labs inception-ai qwen kyutai-labs openai langchain langgraph hugging-face ollama unslothai nvidia amd multimodality mixture-of-experts context-windows tool-use coding image-generation diffusion-models dataset-release multilinguality speech-to-text api prompt-engineering agent-frameworks open-source model-release demishassabis reach_vb tri_dao osanseviero simonw clementdelangue swyx hwchase17 sydneyrunkle
Google released Gemma 3n, a multimodal model for edge devices available in 2B and 4B effective-parameter (E2B/E4B) versions, with support across major frameworks such as Transformers and llama.cpp. Tencent open-sourced Hunyuan-A13B, a Mixture-of-Experts (MoE) model with 80B total parameters (13B active) and a 256K context window, optimized for tool calling and coding. Black Forest Labs released FLUX.1 Kontext [dev], an open image model seeing rapid adoption on Hugging Face. Inception AI Labs launched Mercury, the first commercial-scale diffusion LLM for chat. The FineWeb2 paper introduced a multilingual pre-training dataset and analyzed how data quality affects downstream performance. The Qwen team released Qwen-VLo, a unified visual understanding and generation model. Kyutai Labs released a top-ranked open-source speech-to-text model that runs on Macs and iPhones. OpenAI introduced the Deep Research API with o3/o4-mini models and open-sourced its prompt-rewriter methodology, now integrated into LangChain and LangGraph. The open-source Gemini CLI, an AI terminal agent, passed 30,000 GitHub stars.
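As a rough sketch of driving the new Deep Research API from the OpenAI Python SDK via the Responses API: the model ID `o4-mini-deep-research` and the `web_search_preview` tool type below are assumptions based on the announcement, and the prompt is purely illustrative, so verify against current docs before relying on them.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Assumed model ID and tool type from the Deep Research announcement;
# the prompt is illustrative only.
response = client.responses.create(
    model="o4-mini-deep-research",
    input="Survey this week's open-source speech-to-text releases.",
    tools=[{"type": "web_search_preview"}],
)

print(response.output_text)  # the final synthesized research report
```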
Promptable Prosody, SOTA ASR, and Semantic VAD: OpenAI revamps Voice AI
gpt-4o-transcribe gpt-4o-mini-tts o1-pro kokoro-82m openai replicate speech-to-text text-to-speech voice-activity-detection prompt-engineering real-time-processing model-release api function-calling structured-outputs model-performance juberti sama reach_vb kevinweil omarsar0
OpenAI has launched three new state-of-the-art audio models in its API, including gpt-4o-transcribe, a speech-to-text model that outperforms Whisper, and gpt-4o-mini-tts, a text-to-speech model with promptable prosody that allows control over timing and emotion. The Agents SDK now supports audio, enabling voice agents. OpenAI also updated turn detection with semantic voice activity detection (VAD), which uses the content of speech, not just silence, to decide when a speaker has finished. Additionally, OpenAI's o1-pro model is available to select developers with advanced features like vision and function calling, though at higher compute cost. The community shows strong enthusiasm for these audio advancements, with a radio contest for TTS creations underway. Meanwhile, Kokoro-82M v1.0 has emerged as a leading open-weights TTS model with competitive pricing on Replicate.
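For illustration, a minimal sketch of both audio endpoints using the OpenAI Python SDK; the model IDs come from the announcement, while the file names, voice, and instruction text are illustrative.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Speech-to-text: transcribe a local audio file with gpt-4o-transcribe.
with open("meeting.wav", "rb") as audio_file:
    transcript = client.audio.transcriptions.create(
        model="gpt-4o-transcribe",
        file=audio_file,
    )
print(transcript.text)

# Text-to-speech: "promptable prosody" is exposed through the
# instructions parameter, which steers timing, tone, and emotion.
with client.audio.speech.with_streaming_response.create(
    model="gpt-4o-mini-tts",
    voice="alloy",
    input="Thanks for calling. How can I help you today?",
    instructions="Speak warmly and unhurriedly, pausing briefly after the greeting.",
) as speech:
    speech.stream_to_file("reply.mp3")
```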
The AI Search Wars Have Begun — SearchGPT, Gemini Grounding, and more
gpt-4o o1-preview claude-3.5-sonnet universal-2 openai google gemini nyt perplexity-ai glean nvidia langchain langgraph weights-biases cohere weaviate fine-tuning synthetic-data distillation hallucinations benchmarking speech-to-text robotics neural-networks ai-agents sam-altman alexalbert__ _jasonwei svpino drjimfan virattt
ChatGPT launched its search functionality across all platforms, built on a fine-tuned version of GPT-4o trained with synthetic data generation and distillation from o1-preview. The feature ships with a Chrome extension promoted by Sam Altman but still suffers from hallucinations. The launch coincides with Gemini introducing Search Grounding after delays. Notably, The New York Times is not a partner due to its lawsuit against OpenAI. The AI search competition is intensifying across consumer and B2B players like Perplexity and Glean. Additionally, Claude 3.5 Sonnet set a new record on SWE-bench Verified, and SimpleQA, a new hallucination-evaluation benchmark, was introduced. Other highlights include AssemblyAI's Universal-2 speech-to-text model with 660M parameters and HOVER, a neural whole-body controller for humanoid robots trained in NVIDIA Isaac simulation. AI hedge-fund teams built with LangChain and LangGraph were also showcased. The issue is sponsored by the RAG++ course, featuring experts from Weights & Biases, Cohere, and Weaviate.
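For reference, a minimal sketch of transcribing audio through AssemblyAI's Python SDK; the file path is illustrative, and this assumes Universal-2 serves as the SDK's default speech-to-text model, as it did at launch.

```python
import assemblyai as aai

aai.settings.api_key = "YOUR_ASSEMBLYAI_API_KEY"

# Universal-2 was AssemblyAI's default speech-to-text model at launch,
# so a plain transcription call routes to it; the path is illustrative.
transcriber = aai.Transcriber()
transcript = transcriber.transcribe("interview.mp3")

print(transcript.text)
```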
DeepMind SIMA: one AI, 9 games, 600 tasks, vision+language ONLY
llama-3 claude-3-opus claude-3 gpt-3.5-turbo deepmind cognition-labs deepgram modal-labs meta-ai-fair anthropic multimodality transformer software-engineering ai-agents ai-infrastructure training text-to-speech speech-to-text real-time-processing model-architecture benchmarking andrej-karpathy arav-srinivas francois-chollet yann-lecun soumith-chintala john-carmack
DeepMind SIMA is a generalist AI agent for 3D virtual environments, evaluated on 600 tasks across 9 games using only screengrabs and natural-language instructions; it achieves 34% success versus 60% for humans. The model uses a multimodal Transformer architecture. Andrej Karpathy outlines a progression of AI autonomy in software engineering, while Aravind Srinivas praises Cognition Labs' AI agent demo. François Chollet expresses skepticism about fully automating software engineering. Yann LeCun suggests moving away from generative models and reinforcement learning in the pursuit of human-level AI. Meta's Llama 3 training infrastructure, built on 24k-GPU H100 clusters, is shared by Soumith Chintala and Yann LeCun. Deepgram's Aura offers low-latency speech APIs, and Cognition Labs' Devin demonstrates navigating Modal Labs' documentation and interacting with ComfyUI. Memes and humor circulate in the AI community.
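To make the Aura mention concrete, a hedged sketch of a single text-to-speech request against Deepgram's speak endpoint; the model name `aura-asteria-en` and the exact endpoint shape are assumptions based on Deepgram's public docs at the time, and the API key placeholder and output path are illustrative.

```python
import requests

# Assumed endpoint and model name from Deepgram's Aura docs; the key
# placeholder and output path are illustrative.
resp = requests.post(
    "https://api.deepgram.com/v1/speak?model=aura-asteria-en",
    headers={
        "Authorization": "Token YOUR_DEEPGRAM_API_KEY",
        "Content-Type": "application/json",
    },
    json={"text": "Hello from a low-latency voice agent."},
    timeout=30,
)
resp.raise_for_status()

with open("aura.mp3", "wb") as f:
    f.write(resp.content)  # Deepgram returns the synthesized audio bytes
```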