Company: "lighton"
not much happened today
embeddinggemma qwen-2.5-coder minicpm-v-4.5 gpt-4o gemini-2.0-pro google-deepmind hugging-face jina-ai lighton microsoft stanford openai ollama weaviate langchain llamaindex embeddings retrieval-augmented-generation quantization multilingual-models on-device-ai semantic-search contrastive-learning dataset-release vision multimodality video-generation text-to-speech optimizer-benchmarking training-recipes model-compression video-token-compression fine-tuning osanseviero _philschmid tomaarsen ollama weaviate_io lusxvr andimarafioti thibaudfrere _akhaliq clementdelangue gordonwetzstein konstmish wen_kaiyue percyliang
Google DeepMind released EmbeddingGemma (308M), a small multilingual embedding model optimized for on-device retrieval-augmented generation and semantic search, supporting over 100 languages and running efficiently under quantization with EdgeTPU latency under 15 ms. Jina AI introduced new code-focused embedding models (0.5B/1.5B) with GGUF quantization, achieving state-of-the-art retrieval across multiple languages and tasks. LightOn demonstrated large-scale retrieval training without distillation, using contrastive training on billions of passages. Hugging Face released the FineVision dataset, with 17.3M images and 9.5B answer tokens for vision-language model training, showing significant benchmark improvements. The MiniCPM-V 4.5 (8B) multimodal model reported surpassing GPT-4o and Gemini-2.0 Pro on OpenCompass benchmarks, aided by novel video token compression. Microsoft's VibeVoice TTS and Stanford's Mixture-of-Contexts video generation were also featured. Additionally, a Stanford study benchmarked optimizers such as Muon, Soap, Mars, and Sophia, finding diminishing speedups over AdamW at larger scales but clear advantages at smaller ones. The new ChatGPT branching feature was noted for its simplicity and popularity. "Everyone's a decacorn now."
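The semantic-search step these embedding models serve boils down to ranking documents by cosine similarity against a query embedding. A minimal NumPy sketch of that retrieval step, using toy 4-dimensional vectors as stand-ins for real model output (this is an illustration of the technique, not EmbeddingGemma's or Jina's actual API):

```python
import numpy as np

def cosine_search(query_vec, doc_matrix, top_k=2):
    """Rank documents by cosine similarity to the query embedding."""
    q = query_vec / np.linalg.norm(query_vec)
    d = doc_matrix / np.linalg.norm(doc_matrix, axis=1, keepdims=True)
    scores = d @ q                      # cosine similarity per document
    order = np.argsort(-scores)[:top_k] # indices of best matches, descending
    return order, scores[order]

# Hypothetical toy embeddings standing in for model output.
docs = np.array([
    [0.9, 0.1, 0.0, 0.1],
    [0.0, 0.8, 0.6, 0.0],
    [0.7, 0.2, 0.1, 0.2],
])
query = np.array([1.0, 0.0, 0.0, 0.0])
idx, scores = cosine_search(query, docs)  # doc 0 ranks first here
```

In practice the vectors would come from the embedding model's encode call, and quantization (as in the GGUF releases above) shrinks them for on-device use without changing this ranking logic.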
Gemini 2.5 Pro (06-05) launched at AI Engineer World's Fair
gemini-2.5-pro qwen3-embedding-8b openthinker3-7b google qwen lighton morph-labs openai nvidia benchmarking reasoning coding math embedding-models late-interaction dataset-release model-performance model-architecture ai-conferences greg_brockman jensen_huang christian_szegedy swyx
On the second day of AIE, Google's Gemini 2.5 Pro reclaimed the top spot on the LMArena leaderboard with a score of 1470 and a +24 Elo increase, showing improvements in coding, reasoning, and math. Qwen3 released state-of-the-art embedding and reranking models, with Qwen3-Embedding-8B topping the MTEB multilingual leaderboard. OpenThinker3-7B emerged as the top open reasoning model, trained on the OpenThoughts3-1.2M dataset and outperforming previous models by 33%. LightOn introduced FastPlaid, achieving up to a 554% speedup for late-interaction models. Morph Labs hired Christian Szegedy as Chief Scientist to lead its Verified Superintelligence effort. The AI Engineer World's Fair featured a fireside chat with Greg Brockman and NVIDIA CEO Jensen Huang, highlighting the return of basic research and engineering best practices.
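FastPlaid targets late-interaction retrieval, where queries and documents keep one embedding per token and relevance is a sum of per-query-token maximum similarities rather than a single dot product. A minimal NumPy sketch of that ColBERT-style MaxSim scoring (illustrating the general technique, not FastPlaid's implementation):

```python
import numpy as np

def maxsim_score(query_tokens, doc_tokens):
    """ColBERT-style late interaction: for each query token embedding,
    take the max cosine similarity over all document token embeddings,
    then sum those maxima into a single relevance score."""
    q = query_tokens / np.linalg.norm(query_tokens, axis=1, keepdims=True)
    d = doc_tokens / np.linalg.norm(doc_tokens, axis=1, keepdims=True)
    sim = q @ d.T               # (num_query_tokens, num_doc_tokens)
    return sim.max(axis=1).sum()
```

Because every query token scans every document token, this interaction is far more expensive than single-vector search, which is why engines like FastPlaid focus on accelerating it.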