All tags
Topic: "knowledge-graphs"
The AI Nobel Prize
claude-3.5-sonnet reka-flash got openai anthropic reka-ai zep artificial-neural-networks nobel-prize knowledge-graphs memory-layers real-time-voice-api vision fine-tuning prompt-caching multimodality function-calling ocr open-source single-sign-on software-testing ai-assisted-coding ai-ethics geoff-hinton john-hopfield philschmid alexalbert mervenoyann clementdelangue svpino bindureddy ylecun rohanpaul_ai
Geoff Hinton and John Hopfield won the Nobel Prize in Physics for their work on artificial neural networks; the award citation runs to 14 pages highlighting their contributions. Zep released a new community edition of its low-latency memory layer for AI agents, which uses knowledge graphs as the memory substrate. At OpenAI's DevDay, new features were introduced, including a real-time voice API, vision model fine-tuning, and prompt caching with a 50% discount on reused tokens. Anthropic's Claude 3.5 Sonnet was widely regarded as the best model available. Reka AI Labs updated its Reka Flash model with improved multimodal and function-calling capabilities. The GOT (General OCR Theory) model achieved 98.79% accuracy on OCR benchmarks. Discussions of open-source AI models highlighted their role in fostering competition and decentralization. Software development insights covered the importance of Single Sign-On (SSO), thorough testing, and AI-assisted coding workflows. Ethical and societal topics included critiques of tax policies and the appointment of France's first Minister of AI.
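The prompt-caching discount lends itself to back-of-envelope cost arithmetic. A minimal sketch, assuming an illustrative base rate of $2.50 per million prompt tokens and illustrative token counts (neither is OpenAI's published pricing):

```python
def prompt_cost(total_prompt_tokens: int, cached_tokens: int,
                price_per_token: float, cached_discount: float = 0.5) -> float:
    """Estimate prompt cost when reused (cached) tokens are billed at a discount."""
    uncached = total_prompt_tokens - cached_tokens
    return (uncached * price_per_token
            + cached_tokens * price_per_token * (1 - cached_discount))

# Illustrative numbers: a 10,000-token prompt, $2.50 per 1M tokens.
base = 2.50 / 1_000_000
full = prompt_cost(10_000, 0, base)       # cold request, no cache hits
hit = prompt_cost(10_000, 8_000, base)    # 8,000 reused tokens at half price
```

Structuring prompts so the static portion (system prompt, few-shot examples) comes first maximizes the reusable prefix and therefore the discount.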
GraphRAG: The Marriage of Knowledge Graphs and RAG
gemma-2 llama-3-70b claude-3.5-sonnet nemotron-340b qwen2-72b llama-3 microsoft-research anthropic nvidia hugging-face retrieval-augmented-generation knowledge-graphs token-usage inference-time attention-mechanisms instruction-following coding math long-range-reasoning synthetic-data dataset-release fine-tuning context-windows function-calling travis-fischer rasbt alexandr-wang osanseviero rohanpaul_ai hamelhusain svpino aaaazzam omarsar0
Microsoft Research open-sourced GraphRAG, a retrieval-augmented generation (RAG) technique that extracts knowledge graphs from source documents and clusters them into communities to improve LLM answers, at the cost of higher token usage and inference time. The Gemma 2 models were released with a focus on efficient small LLMs, using innovations such as sliding-window attention and RMSNorm, and nearly match the much larger Llama 3 70B. Anthropic's Claude 3.5 Sonnet leads instruction-following and coding benchmarks, while Nvidia's Nemotron 340B model was released in June. Qwen2-72B tops the Hugging Face Open LLM Leaderboard, excelling in math and long-range reasoning. Discussions of RAG highlighted its limitations and improvements to context usage via function calls. A persona-driven synthetic data generation approach introduced 1 billion personas, with a fine-tuned 7B model matching GPT-4 performance on math benchmarks. The 200GB AutoMathText dataset was also noted for math data synthesis.
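The GraphRAG pipeline (extract a knowledge graph, then cluster it to support answering) can be sketched in miniature. This is an illustrative toy, not Microsoft's implementation: the hand-written triples stand in for LLM-based extraction, and connected components stand in for the hierarchical community detection (e.g. Leiden) that GraphRAG actually uses:

```python
from collections import defaultdict

# Toy stand-in for LLM-based triple extraction: GraphRAG prompts an LLM
# to pull (subject, relation, object) triples out of source documents.
triples = [
    ("GraphRAG", "released_by", "Microsoft Research"),
    ("GraphRAG", "builds", "knowledge graph"),
    ("knowledge graph", "clustered_into", "communities"),
    ("Gemma 2", "released_by", "Google"),
]

# Build an undirected adjacency list over the extracted entities.
graph = defaultdict(set)
for subj, _, obj in triples:
    graph[subj].add(obj)
    graph[obj].add(subj)

def communities(graph):
    """Cluster entities into connected components -- a simplistic stand-in
    for GraphRAG's hierarchical community detection. Each cluster would
    then be summarized by an LLM and used to answer global questions."""
    seen, clusters = set(), []
    for node in graph:
        if node in seen:
            continue
        stack, comp = [node], set()
        while stack:
            n = stack.pop()
            if n in comp:
                continue
            comp.add(n)
            stack.extend(graph[n] - comp)
        seen |= comp
        clusters.append(comp)
    return clusters
```

The token-usage cost mentioned above comes from the extra LLM calls: one pass to extract triples and another to summarize each community.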
HippoRAG: First, do know(ledge) Graph
qwen-2 gpt-4 hipporag alibaba openai knowledge-graphs personalized-pagerank multi-hop-retrieval chain-of-thought implicit-reasoning sparse-autoencoders model-interpretability model-efficiency model-architecture fine-tuning reinforcement-learning rohanpaul_ai omarsar0 nabla_theta huybery
Alibaba released new open-source Qwen2 models ranging from 0.5B to 72B parameters, achieving SOTA results on benchmarks such as MMLU and HumanEval. Researchers introduced sparse autoencoders to interpret GPT-4's internal activations, improving feature representation. The HippoRAG paper proposes a hippocampus-inspired retrieval-augmentation method that combines knowledge graphs with Personalized PageRank for efficient multi-hop reasoning. New techniques such as Stepwise Internalization enable implicit chain-of-thought reasoning in LLMs, improving both accuracy and speed. The Buffer of Thoughts (BoT) method improves reasoning efficiency with a significant cost reduction. A novel, scalable MatMul-free LLM architecture competitive with SOTA Transformers at billion-parameter scale was also presented. "Single-Step, Multi-Hop retrieval" is highlighted as a key advance in retrieval speed and cost.
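The Personalized PageRank step at the heart of HippoRAG can be sketched with plain power iteration. This is a minimal illustration, not the paper's code; the toy knowledge graph and seed entity below are invented for the example:

```python
def personalized_pagerank(graph, seeds, damping=0.85, iters=50):
    """Power-iteration Personalized PageRank: random walks restart at the
    seed nodes (the query's entities) rather than uniformly at random, so
    score mass concentrates on nodes reachable from the query in a few
    hops -- which is what makes single-step, multi-hop retrieval possible."""
    nodes = list(graph)
    restart = {n: (1.0 / len(seeds) if n in seeds else 0.0) for n in nodes}
    rank = dict(restart)
    for _ in range(iters):
        nxt = {n: (1 - damping) * restart[n] for n in nodes}
        for n in nodes:
            out = graph[n]
            if not out:
                continue
            share = damping * rank[n] / len(out)
            for m in out:
                nxt[m] += share
        rank = nxt
    return rank

# Toy knowledge graph: seeding the walk at "Stanford" ranks entities
# connected to it (even two hops away) above unrelated ones.
g = {
    "Stanford": ["Thomas"],
    "Thomas": ["Stanford", "Alzheimers"],
    "Alzheimers": ["Thomas"],
    "UCSD": ["Alice"],
    "Alice": ["UCSD"],
}
scores = personalized_pagerank(g, seeds={"Stanford"})
```

Because the restart distribution carries all the query information, one pass of PPR replaces the iterative retrieve-then-reason loop that multi-hop questions would otherwise require.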
12/28/2023: Smol Talk updates
tinyllama-1.1b mixtral tinygpt-v nous-research tyrannosaurus latex benchmarking knowledge-graphs model-finetuning tokenization decentralized-computation philosophy-of-ai multimodality vision open-source-models gary-marcus
Nous Research AI Discord discussions covered AI placement charts, ChatGPT's difficulty generating LaTeX math in a format compatible with Obsidian, and benchmark results for the TinyLlama 1.1B model. Users shared resources including the math-centric corpus MathPile, methods for building knowledge graphs, and open-source large language model repositories. Technical discussions included the feasibility of decentralized computation for models like Mixtral, philosophical debates on AI sentience, and strategies for model fine-tuning and token counting. The community also discussed the Obsidian model, vision model training, and Tyrannosaurus's release of the multimodal TinyGPT-V model. Notable quotes included "ChatGPT not generating LaTeX math format compatible with Obsidian" and "optimistic about human-level AI within our lifetime".
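On the token-counting strategies mentioned above: a rough heuristic can estimate prompt length without a tokenizer dependency. This is a hedged approximation sketch (the function names are invented for illustration); exact counts require the model's own tokenizer, such as `tiktoken` for OpenAI models:

```python
import re

def approx_token_count(text: str) -> int:
    """Rough token estimate: count word and punctuation chunks, since
    subword tokenizers typically emit on the order of 1-2 tokens per
    word. Use the model's real tokenizer when precision matters."""
    return len(re.findall(r"\w+|[^\w\s]", text))

def fits_in_context(text: str, context_window: int, margin: float = 0.1) -> bool:
    """Check a prompt against a context window, leaving a safety margin
    because the estimate above is only approximate."""
    return approx_token_count(text) <= context_window * (1 - margin)
```

The safety margin matters in practice: under-estimating by even a few tokens on a full context window causes hard truncation errors.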