Person: "lilian-weng"
Test-Time Training, MobileLLM, Lilian Weng on Hallucination (Plus: Turbopuffer)
llama-2-7b codegeex4-all-9b mamba facebook-research meta-ai-fair tsinghua-university hallucination-detection anti-hallucination-methods on-device-ai model-architecture rnn long-context-modeling model-scaling expressive-hidden-states code-generation lilian-weng yann-lecun
Lilian Weng released a comprehensive literature review on hallucination detection and anti-hallucination methods, covering techniques such as FactualityPrompt, SelfCheckGPT, and WebGPT. Meta AI (FAIR) published MobileLLM, a sub-billion-parameter on-device language model architecture achieving performance comparable to llama-2-7b on some tasks through innovations such as deep-and-thin architectures and weight sharing. The Test-Time Training (TTT) work introduced a new RNN-based LLM architecture with expressive hidden states that replaces attention and scales better than Mamba and Transformer baselines for long-context modeling. Additionally, Tsinghua University open-sourced CodeGeeX4-ALL-9B, a multilingual code-generation model that excels at code assistance.
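The SelfCheckGPT method covered in the review rests on a simple observation: facts the model actually knows tend to reproduce across independently sampled responses, while hallucinations do not. Below is a minimal sketch of that sampling-consistency check, using crude token overlap in place of the paper's BERTScore/NLI scorers; the `generate` callable is a hypothetical stand-in for any LLM API.

```python
import re


def self_check(prompt: str, generate, n_samples: int = 5):
    """Score each sentence of a response by its consistency with
    independently sampled responses; low support suggests hallucination.
    `generate(prompt, temperature)` is a stand-in for any LLM call."""
    answer = generate(prompt, temperature=0.0)                 # main response
    samples = [generate(prompt, temperature=1.0) for _ in range(n_samples)]

    def overlap(sentence: str, sample: str) -> float:
        # Fraction of the sentence's words that reappear in a sample.
        s = set(re.findall(r"\w+", sentence.lower()))
        t = set(re.findall(r"\w+", sample.lower()))
        return len(s & t) / max(len(s), 1)

    # Average support of each sentence across all stochastic samples.
    return [
        (sent, sum(overlap(sent, smp) for smp in samples) / n_samples)
        for sent in re.split(r"(?<=[.!?])\s+", answer)
    ]
```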
Lilian Weng on Video Diffusion
wizardlm-2 llama-3 reka-core devin opus sora openai adobe reka-ai diffusion-models video-generation training-free-adaptation multimodality intuition creativity analogy-recognition self-improving-ai model-recognition agi-timelines model-performance startup-competition lilian-weng sam-altman geoffrey-hinton yann-lecun
OpenAI expands with a launch in Japan, introduces a Batch API, and partners with Adobe to bring the Sora video model to Premiere Pro. Reka AI releases the Reka Core multimodal language model. WizardLM-2 is released, showing impressive performance, and Llama 3 news is anticipated soon. Geoffrey Hinton highlights AI models exhibiting intuition, creativity, and analogy recognition beyond humans. Devin, the autonomous coding agent, notably contributes to its own codebase. Claude 3 Opus demonstrates the ability to recognize its own generated outputs. Sam Altman warns startups that they risk being steamrolled by OpenAI if they don't adapt quickly. Yann LeCun argues that AGI is inevitable but neither imminent nor achievable from LLMs alone. Lilian Weng's blog post on diffusion models for video generation highlights training-free adaptation of image diffusion models to video as a breakthrough technique.
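For reference, submitting jobs to the new Batch API looks roughly like the following in the openai Python SDK; the `requests.jsonl` filename is illustrative, and field names should be checked against OpenAI's current documentation.

```python
from openai import OpenAI

client = OpenAI()

# Each line of requests.jsonl is one request, e.g.:
# {"custom_id": "req-1", "method": "POST", "url": "/v1/chat/completions",
#  "body": {"model": "gpt-3.5-turbo", "messages": [{"role": "user", "content": "hi"}]}}
batch_file = client.files.create(file=open("requests.jsonl", "rb"), purpose="batch")

batch = client.batches.create(
    input_file_id=batch_file.id,
    endpoint="/v1/chat/completions",
    completion_window="24h",  # results arrive asynchronously at reduced cost
)
print(batch.id, batch.status)  # poll later with client.batches.retrieve(batch.id)
```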
Andrew likes Agents
gpt-3.5 gpt-4 cyberrealistic_v40 platypus-xl sdxl-lightning openai stability-ai agents human-eval-benchmark fine-tuning local-llm-deployment inference-speed image-generation lora upscaling workflow-optimization andrew-ng lilian-weng emad
Andrew Ng's The Batch writeup on agents highlighted the significant improvement in coding-benchmark performance from an iterative agent workflow: GPT-3.5 wrapped in an agent loop achieved up to 95.1% correctness on HumanEval, surpassing GPT-4 zero-shot at 67.0%. The report also covers new Stable Diffusion checkpoints such as Cyberrealistic_v40, Platypus XL, and SDXL Lightning for Naruto-style image generation, alongside innovations in LoRA and upscaling techniques. Discussions of local LLM deployment and optimization focus on hardware setups and fine-tuning strategies for efficient inference and multi-user serving. Emad's departure from Stability AI and new Sora videos from OpenAI were also noted.
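The agent loop Ng describes is essentially generate, test, reflect, retry. Here is a toy sketch under that assumption; `llm` is a hypothetical chat-completion callable, and a real HumanEval harness would sandbox the `exec` call rather than run candidates in-process.

```python
def agent_loop(task: str, tests: str, llm, max_iters: int = 4) -> str:
    """Draft code, run the provided tests, and feed failures back to the
    model until the tests pass or the iteration budget runs out."""
    messages = [{"role": "user", "content": f"Write a Python function for:\n{task}"}]
    code = llm(messages)
    for _ in range(max_iters):
        try:
            scope: dict = {}
            exec(code + "\n" + tests, scope)  # run candidate plus its tests
            return code                       # all assertions passed
        except Exception as err:              # use the failure as feedback
            messages += [
                {"role": "assistant", "content": code},
                {"role": "user", "content": f"The tests failed with:\n{err}\nFix the code."},
            ]
            code = llm(messages)
    return code  # best effort after max_iters
```

The feedback step is what separates this from zero-shot prompting: each failed assertion becomes a new user turn, which is the mechanism behind the HumanEval gains cited above.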