Topic: "frameworks"
Cognition's DeepWiki, a free encyclopedia of all GitHub repos
o4-mini perception-encoder qwen-2.5-vl dia-1.6b grok-3 gemini-2.5-pro claude-3.7 gpt-4.1 cognition meta-ai-fair alibaba hugging-face openai perplexity-ai vllm vision text-to-speech reinforcement-learning ocr model-releases model-integration open-source frameworks chatbots model-selector silas-alberti mervenoyann reach_vb aravsrinivas vikparuchuri lioronai
Silas Alberti of Cognition announced DeepWiki, a free encyclopedia of all public GitHub repos, providing Wikipedia-style descriptions and Devin-backed chatbots for each repo. Meta released Perception Encoders (PE) under an Apache 2.0 license, outperforming InternVL3 and Qwen2.5-VL on vision tasks. Alibaba launched the Qwen Chat app for iOS and Android. Hugging Face integrated the Dia 1.6B state-of-the-art text-to-speech model via FAL. OpenAI expanded deep research usage with a lightweight version powered by the o4-mini model, now available to free users. Perplexity AI updated its model selector with Grok 3 Beta and o4-mini, alongside support for models such as Gemini 2.5 Pro, Claude 3.7, and GPT-4.1. The vLLM project introduced the OpenRLHF framework for reinforcement learning from human feedback. The Surya OCR alpha model supports 90+ languages plus LaTeX, and the open-source MegaParse library was introduced for converting documents into LLM-ready formats.
not much happened today
gemini-2.0-flash imagen-3 mistral-small-3.1 mistral-3 gpt-4o-mini claude-3.5-haiku olmo-32b qwen-2.5 shieldgemma-2 julian fasttransform nvidia google mistral-ai allen-ai anthropic langchainai perplexity-ai kalshi stripe qodoai multimodality image-generation context-windows model-pricing open-source-models image-classification frameworks python-libraries partnerships jeremyphoward karpathy abacaj mervenoyann
At Nvidia GTC Day 1, several AI updates were highlighted. Google's Gemini 2.0 Flash adds image input and output, though Google recommends Imagen 3 over it for text-to-image tasks. Mistral AI released Mistral Small 3.1 with a 128k-token context window and competitive pricing. Allen AI launched OLMo-32B, an open LLM that outperforms GPT-4o mini and Qwen 2.5, and ShieldGemma 2 was introduced for image safety classification. LangChainAI announced multiple updates, including Julian, powered by LangGraph, and integration with Anthropic's MCP. Jeremy Howard released fasttransform, a Python library for data transformations, and Perplexity AI partnered with Kalshi for NCAA March Madness predictions.
DocETL: Agentic Query Rewriting and Evaluation for Complex Document Processing
bitnet-b1.58 llama-3.1-nemotron-70b-instruct gpt-4o claude-3.5-sonnet uc-berkeley deepmind openai microsoft nvidia archetype-ai boston-dynamics toyota-research google adobe mistral tesla meta-ai-fair model-optimization on-device-ai fine-tuning large-corpus-processing gpu-acceleration frameworks model-benchmarking rohanpaul_ai adcock_brett david-patterson
UC Berkeley's EPIC lab is building LLM data operators through projects like LOTUS and DocETL, which focus on effective programming and computation over large data corpora; the work contrasts GPU-rich big labs like DeepMind and OpenAI with GPU-poor compound AI systems. Microsoft open-sourced BitNet b1.58, a 1.58-bit LLM with ternary weights, enabling 4-20x faster training and on-device inference at human reading speeds. Nvidia released Llama-3.1-Nemotron-70B-Instruct, a fine-tuned open-source model that outperforms GPT-4o and Claude 3.5 Sonnet. Together, these developments highlight advances in model optimization, on-device AI, and fine-tuning.
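The "ternary" in BitNet b1.58 refers to constraining each weight to one of three values, {-1, 0, +1}, which costs log2(3) ≈ 1.58 bits per weight. A minimal sketch of the absmean quantization scheme described in the BitNet b1.58 paper (the function name and tensor values here are illustrative, not from Microsoft's release):

```python
import numpy as np

def absmean_ternary_quantize(w: np.ndarray):
    """Quantize a weight tensor to {-1, 0, +1} using absmean scaling.

    Each weight is divided by the mean absolute value of the tensor,
    rounded to the nearest integer, and clipped to the ternary range.
    Dequantize by multiplying q * scale.
    """
    scale = np.abs(w).mean() + 1e-8          # per-tensor absmean scale
    q = np.clip(np.round(w / scale), -1, 1)  # round, then clip to {-1, 0, +1}
    return q.astype(np.int8), scale

# Toy weight matrix (illustrative values)
w = np.array([[0.4, -1.2, 0.05],
              [0.9, -0.3, 1.5]])
q, s = absmean_ternary_quantize(w)
# q == [[1, -1, 0], [1, 0, 1]]; multiplications against q reduce to
# additions, subtractions, and skips, which is where the speedups come from.
```

Because every weight is -1, 0, or +1, a matrix multiply needs no floating-point multiplications at all, which is what makes fast CPU and on-device inference practical.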