Topic: "agentic-rag"
Qwen with Questions: 32B open weights reasoning model nears o1 in GPQA/AIME/Math500
deepseek-r1 qwq gpt-4o claude-3.5-sonnet qwen-2.5 llama-cpp deepseek sambanova hugging-face dair-ai model-releases benchmarking fine-tuning sequential-search inference model-deployment agentic-rag external-tools multi-modal-models justin-lin clementdelangue ggerganov vikparuchuri
DeepSeek R1 leads the race for "open o1" models but has yet to release weights, while Justin Lin released QwQ, a 32B open-weight model that outperforms GPT-4o and Claude 3.5 Sonnet on reasoning benchmarks (GPQA, AIME, MATH-500). QwQ appears to be a fine-tune of Qwen 2.5 that emphasizes sequential search and reflection for complex problem-solving. SambaNova is promoting its RDUs as superior to GPUs for inference workloads, highlighting the industry's shift from training to inference. On Twitter, Hugging Face announced CPU deployment for llama.cpp instances, Marker v1 was released as a faster and more accurate document conversion tool, and agentic RAG work focused on integrating external tools and more advanced LLM chains to improve response accuracy. The open-source AI community sees growing momentum, with models like Flux gaining popularity and a broader shift toward multi-modal models spanning image, video, audio, and biology.
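For readers new to the pattern: the "agentic RAG" setups mentioned above amount to an LLM that decides, turn by turn, whether to call a retrieval tool and fold the results back into its context before answering. Below is a minimal sketch of that loop; the `SEARCH:` string convention and the `call_llm`/`retrieve` callables are illustrative placeholders, not any specific framework's API.

```python
from typing import Callable

Message = dict[str, str]

def agentic_rag(
    question: str,
    call_llm: Callable[[list[Message]], str],   # any chat-completion client
    retrieve: Callable[[str], list[str]],       # any search / vector-store tool
    max_steps: int = 3,
) -> str:
    """Let the model decide when to retrieve before it answers (illustrative sketch)."""
    messages: list[Message] = [
        {"role": "system", "content": (
            "Answer the user. If you need outside facts, reply exactly with "
            "'SEARCH: <query>'; you will then receive passages. "
            "Otherwise answer directly."
        )},
        {"role": "user", "content": question},
    ]
    for _ in range(max_steps):
        reply = call_llm(messages)
        if reply.startswith("SEARCH:"):                  # model chose to use the tool
            query = reply.removeprefix("SEARCH:").strip()
            passages = retrieve(query)
            messages.append({"role": "assistant", "content": reply})
            messages.append({"role": "user",
                             "content": "Retrieved passages:\n" + "\n---\n".join(passages)})
        else:                                            # model answered from its context
            return reply
    # Step budget exhausted: force a final answer from whatever was gathered.
    return call_llm(messages + [{"role": "user",
                                 "content": "Answer now using what you have."}])
```

Production systems typically replace the string convention with structured tool/function calling and add reranking, citations, and stop conditions, but the control flow is the same.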
1/8/2024: The Four Wars of the AI Stack
mixtral mistral nous-research openai mistral-ai hugging-face context-window distributed-models long-context hierarchical-embeddings agentic-rag fine-tuning synthetic-data oil-and-gas embedding-datasets mixture-of-experts model-comparison
The Nous Research AI Discord discussions highlighted several key topics, including the use of DINO, CLIP, and CNNs in the Obsidian Project. A research paper on distributed models such as DistAttention and DistKV-LLM was shared, addressing the challenges of cloud-based LLM serving. Another paper, 'Self-Extend LLM Context Window Without Tuning', argued that existing LLMs can inherently handle long contexts without fine-tuning. The community also discussed models like Mixtral, favored for its 32k context window, comparing it with Mistral and Marcoroni. Other topics included hierarchical embeddings, agentic retrieval-augmented generation (RAG), synthetic data for fine-tuning, and applications of LLMs in the oil & gas industry. The launch of the AgentSearch-V1 dataset, with one billion embedding vectors, was also announced, and the discussions covered mixture-of-experts (MoE) implementations and the performance of smaller models.
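As a reference point for the MoE discussion, the core of most implementations is a learned router that sends each token to its top-k experts and mixes their outputs by the renormalised gate weights. Below is a rough NumPy sketch of top-2 routing; shapes and names are illustrative, and it omits the load-balancing losses and capacity limits that real MoE models such as Mixtral rely on.

```python
import numpy as np

def top2_moe_layer(x, gate_w, expert_ws):
    """
    Minimal top-2 mixture-of-experts forward pass (illustrative only).

    x          : (tokens, d_model)          token activations
    gate_w     : (d_model, n_experts)       router weights
    expert_ws  : list of (d_model, d_model) one weight matrix per expert
    """
    logits = x @ gate_w                                # (tokens, n_experts)
    # softmax over experts
    probs = np.exp(logits - logits.max(axis=-1, keepdims=True))
    probs /= probs.sum(axis=-1, keepdims=True)

    top2 = np.argsort(probs, axis=-1)[:, -2:]          # two best experts per token
    out = np.zeros_like(x)
    for t in range(x.shape[0]):
        chosen = top2[t]
        weights = probs[t, chosen]
        weights = weights / weights.sum()              # renormalise over the chosen pair
        for e, w in zip(chosen, weights):
            out[t] += w * (x[t] @ expert_ws[e])        # weighted sum of expert outputs
    return out

# Toy usage
rng = np.random.default_rng(0)
d, n_exp, tokens = 8, 4, 5
y = top2_moe_layer(rng.normal(size=(tokens, d)),
                   rng.normal(size=(d, n_exp)),
                   [rng.normal(size=(d, d)) for _ in range(n_exp)])
print(y.shape)  # (5, 8)
```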