All tags
Topic: "transformer-architecture"
Genesis: Generative Physics Engine for Robotics (o1-mini version)
o1 o1-preview gpt-4o claude-3.5-sonnet gemini-2.0-pro llama-3-3b llama-3-70b openai google-deepmind meta-ai-fair hugging-face function-calling structured-outputs vision performance-benchmarks sdk webrtc reasoning math code-generation transformer-architecture model-training humanoid-robots search model-efficiency dataset-sharing aidan_mclau sundarpichai adcock_brett
OpenAI launched the o1 model API featuring function calling, structured outputs, vision support, and developer messages, using 60% fewer reasoning tokens than o1-preview. The model excels in math and code with a 0.76 LiveBench Coding score, outperforming Sonnet 3.5. OpenAI also released beta SDKs for Go and Java, along with WebRTC support and 60% lower prices. Google accelerated deployment of Gemini 2.0 Pro (Gemini Exp 1206), showing improved coding, math, and reasoning performance. Meta AI FAIR introduced research on training transformers directly on raw bytes using dynamic entropy-based patching. Commercial humanoid robots were successfully deployed by an industry player. Hugging Face researchers demonstrated that a 3B Llama model can outperform the 70B Llama model on MATH-500 accuracy using test-time search techniques, highlighting efficiency gains from smaller models, though concerns about reproducibility and domain-specific limitations were noted.
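For context, a minimal sketch of what calling the o1 API with a developer message and function calling might look like, assuming the standard `openai` Python SDK; the `get_weather` tool is a hypothetical illustration, not part of the announcement.

```python
# Sketch: calling the o1 API with a developer message and function calling.
# Assumes the standard `openai` Python SDK; the `get_weather` tool is a
# hypothetical example, not something from the announcement.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Look up current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

response = client.chat.completions.create(
    model="o1",
    messages=[
        # o1 takes "developer" messages in place of "system" messages.
        {"role": "developer", "content": "Answer tersely; call tools when needed."},
        {"role": "user", "content": "What's the weather in Tokyo?"},
    ],
    tools=tools,
)
print(response.choices[0].message)
```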
Meta BLT: Tokenizer-free, Byte-level LLM
byte-latent-transformer llama-3 phi-4 gpt-4o command-r7b meta-ai-fair llamaindex microsoft deepseek-ai openai cohere anthropic tokenization transformer-architecture model-efficiency benchmarking multimodality vision reinforcement-learning model-scaling jailbreaking model-optimization
Meta AI introduced the Byte Latent Transformer (BLT), a tokenizer-free architecture that dynamically forms byte patches for efficient compute allocation, outperforming Llama 3 on benchmarks including the CUTE benchmark. The model was trained on approximately 1 trillion tokens and features a three-block transformer design with local and global components. This approach challenges traditional tokenization and may enable new multimodal capabilities such as direct file interaction without retrieval-augmented generation. Additionally, Microsoft announced the Phi-4 14B-parameter model, achieving state-of-the-art results on STEM and reasoning benchmarks and surpassing GPT-4o. DeepSeek AI launched new vision-language models based on their MoE architecture, with sizes ranging from 1.0B to 27B parameters. OpenAI released a new Projects feature for ChatGPT, and Cohere introduced Command R7B, their smallest and fastest model. Anthropic published research on "Best-of-N Jailbreaking" vulnerabilities across text, vision, and audio models. Industry discussion highlighted a trend of decreasing frontier LLM sizes, with GPT-4 at approximately 1.8 trillion parameters compared to newer, smaller models.
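A toy sketch of the dynamic entropy-based patching idea behind BLT: segment a byte stream wherever next-byte entropy spikes. The stub probability model and threshold below are illustrative assumptions, not the paper's actual components.

```python
# Sketch of entropy-based dynamic byte patching, the core idea behind BLT:
# start a new patch whenever the next-byte distribution is "surprising".
# `next_byte_probs` stands in for a small learned byte LM; here it is a
# uniform stub so the example runs (so every position becomes a boundary).
import math

def next_byte_probs(prefix: bytes) -> list[float]:
    # Placeholder: a real system queries a small byte-level LM here.
    return [1.0 / 256] * 256

def entropy(probs: list[float]) -> float:
    return -sum(p * math.log2(p) for p in probs if p > 0)

def patch(data: bytes, threshold: float = 4.0) -> list[bytes]:
    patches, start = [], 0
    for i in range(1, len(data)):
        if entropy(next_byte_probs(data[:i])) > threshold:
            patches.append(data[start:i])  # high entropy -> patch boundary
            start = i
    patches.append(data[start:])
    return patches

print(patch(b"hello world, byte latent transformer"))
```

The payoff is that predictable byte runs get grouped into long patches that cost little compute, while hard-to-predict regions get fine-grained patches.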
Shazeer et al (2024): you are overpaying for inference >13x
claude-3.5-sonnet claude-3-opus character.ai anthropic memory-efficiency kv-cache attention-mechanisms stateful-caching int8-precision transformer-architecture scaling overfitting architecture noam-shazeer kevin-a-fischer sebastien-bubeck _aidan_clark_ andrej-karpathy
Noam Shazeer explains how Character.ai serves LLM inference at roughly 20% of Google Search's query volume while reducing serving costs by a factor of 33 compared to late 2022, with leading commercial APIs costing at least 13.5X more. Key memory-efficiency techniques include multi-query attention (MQA) rather than grouped-query attention (GQA), cutting KV cache size by 8X; hybrid attention horizons; cross-layer KV-sharing; stateful caching with a 95% cache hit rate; and native int8 precision with custom kernels. Anthropic released Claude 3.5 Sonnet, which outperforms Claude 3 Opus at twice the speed and one-fifth the cost, passing 64% of internal pull-request tests and introducing new features like Artifacts for real-time document and code generation. Discussions on LLM architecture highlight the dominance of transformers, challenges in scaling and overfitting, and the importance of architecture work for progress.
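To see where the KV-cache savings come from, here is back-of-envelope sizing code; the layer count, head dimension, and sequence length are assumed for illustration, not Character.ai's actual configuration.

```python
# Back-of-envelope KV-cache sizing with illustrative (assumed) numbers,
# showing how fewer KV heads plus int8 shrink per-sequence cache memory.
def kv_cache_bytes(n_layers, n_kv_heads, head_dim, seq_len, bytes_per_elem):
    return 2 * n_layers * n_kv_heads * head_dim * seq_len * bytes_per_elem  # 2x: keys + values

layers, head_dim, seq = 32, 128, 8192
gqa = kv_cache_bytes(layers, n_kv_heads=8, head_dim=head_dim, seq_len=seq, bytes_per_elem=2)  # GQA, fp16
mqa = kv_cache_bytes(layers, n_kv_heads=1, head_dim=head_dim, seq_len=seq, bytes_per_elem=1)  # MQA, int8

print(f"GQA fp16: {gqa / 2**20:.0f} MiB per sequence")  # 1024 MiB
print(f"MQA int8: {mqa / 2**20:.0f} MiB per sequence")  # 64 MiB: 8X from heads * 2X from int8
```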
Life after DPO (RewardBench)
gpt-3 gpt-4 gpt-5 gpt-6 llama-3-8b llama-3 claude-3 gemini x-ai openai mistral-ai anthropic cohere meta-ai-fair hugging-face nvidia reinforcement-learning-from-human-feedback direct-preference-optimization reward-models rewardbench language-model-history model-evaluation alignment-research preference-datasets personalization transformer-architecture nathan-lambert chris-manning elon-musk bindureddy rohanpaul_ai nearcyan
xAI raised $6 billion at a $24 billion valuation, positioning it among the most highly valued AI startups, with the funding expected to support training of GPT-5- and GPT-6-class models. The RewardBench tool, developed by Nathan Lambert, evaluates reward models (RMs) for language models and shows Cohere's RMs outperforming open-source alternatives. The discussion traces the evolution of language models from Claude Shannon's 1948 model to GPT-3 and beyond, emphasizing the role of RLHF (Reinforcement Learning from Human Feedback) and the newer DPO (Direct Preference Optimization) method. Notably, some reward-model-focused Llama 3 8B models currently outperform GPT-4, Cohere, Gemini, and Claude on the RewardBench leaderboard, raising questions about reward hacking. Future alignment research directions include improving preference datasets, DPO techniques, and personalization in language models. The report also compares xAI's valuation with OpenAI, Mistral AI, and Anthropic, noting speculation about xAI's spending on Nvidia hardware.
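For reference, a minimal sketch of the DPO objective discussed above, following Rafailov et al. (2023); the log-probability tensors are placeholder values standing in for sums of per-token log-probabilities under the policy and reference models.

```python
# Sketch of the DPO (Direct Preference Optimization) loss.
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logp, policy_rejected_logp,
             ref_chosen_logp, ref_rejected_logp, beta=0.1):
    # Implicit reward of each completion: beta * log(pi / pi_ref)
    chosen_reward = beta * (policy_chosen_logp - ref_chosen_logp)
    rejected_reward = beta * (policy_rejected_logp - ref_rejected_logp)
    # Logistic loss on the reward margin: prefer chosen over rejected.
    return -F.logsigmoid(chosen_reward - rejected_reward).mean()

loss = dpo_loss(torch.tensor([-12.0]), torch.tensor([-15.0]),
                torch.tensor([-13.0]), torch.tensor([-14.5]))
print(loss)  # smaller when the policy favors the chosen completion more than the reference does
```

Unlike RLHF, no explicit reward model is trained; the policy's own log-ratios against the reference act as an implicit reward, which is also why RewardBench-style evaluation of DPO-derived rewards is informative.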
GPT-4o: the new SOTA-EVERYTHING Frontier model (GPT4T version)
gpt-4o gpt-3.5 llama-3 openai hugging-face nous-research eleutherai hazyresearch real-time-reasoning coding-capabilities fine-tuning knowledge-distillation hardware-optimization quantization multimodality mixture-of-experts efficient-attention model-scaling depth-upscaling transformer-architecture gpu-optimization prompt-engineering
OpenAI launched GPT-4o, a frontier model supporting real-time reasoning across audio, vision, and text; it is now free for all ChatGPT users, with enhanced coding capabilities and advanced voice and video features to come. Discussions cover open-source LLMs like Llama 3, fine-tuning techniques including knowledge distillation for GPT-3.5, and hardware optimization strategies such as quantization. Emerging architectures include multimodal integrations with ChatGPT voice and the Open Interpreter API, Mixture-of-Experts models combining autoregressive and diffusion approaches, and novel designs like the YOCO architecture and the ThunderKittens DSL for efficient GPU use. Research advances in efficient attention methods like Conv-Basis using FFT and model-scaling techniques such as depth upscaling were also highlighted.
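As a concrete example of the quantization strategies mentioned, here is a sketch of symmetric per-tensor int8 weight quantization; it is a generic illustration, not any specific system's kernel.

```python
# Sketch: symmetric per-tensor int8 weight quantization and dequantization.
import torch

def quantize_int8(w: torch.Tensor):
    scale = w.abs().max() / 127.0            # one scale for the whole tensor
    q = torch.clamp((w / scale).round(), -127, 127).to(torch.int8)
    return q, scale

def dequantize(q: torch.Tensor, scale: torch.Tensor) -> torch.Tensor:
    return q.to(torch.float32) * scale

w = torch.randn(4, 4)
q, s = quantize_int8(w)
print((w - dequantize(q, s)).abs().max())  # small reconstruction error
```

Production schemes typically use finer granularity (per-channel or per-group scales) to reduce the error this per-tensor version incurs on outlier weights.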
1/9/2024: Nous Research lands $5m for Open Source AI
qlora phi-3 mixtral ollama nous-research openai rabbit-tech context-window fine-tuning synthetic-data activation-beacon transformer-architecture seed-financing real-time-voice-agents trillion-parameter-models kenakafrosty _stilic_ teknium
Nous Research announced $5.2 million in seed financing focused on Nous-Forge, aiming to embed transformer architectures into chips for powerful servers supporting real-time voice agents and trillion-parameter models. Rabbit R1 launched a demo at CES to mixed reactions. OpenAI shipped the GPT Store and briefly leaked an upcoming personalization feature. A new paper on Activation Beacon proposes a way to significantly extend LLMs' context windows, with code to be released on GitHub. Discussions also covered QLoRA, fine-tuning, synthetic data, and custom architectures for LLMs.
1/3/2024: RIP Coqui
sdxl diffusers-0.25 coqui mozilla hugging-face google text-to-speech performance-optimization token-management transformer-architecture image-datasets web-crawling pytorch leaderboards
Coqui, a prominent open-source text-to-speech project from the Mozilla ML group, officially shut down. Discussions in the HuggingFace Discord highlighted skepticism about the claimed 3X faster speed of SDXL, attributing improvements more to techniques like torch.compile and changes to fp16 and attention handling than to diffusers 0.25 features. Users confirmed that a HuggingFace user token can be used across multiple machines, though distinct tokens are recommended for safety. The LLM Leaderboard briefly experienced issues but was later confirmed operational. A Kaggle notebook was shared demonstrating how to build Transformer architectures from scratch using PyTorch (a minimal sketch follows below). Additionally, a new image dataset with 15k shoe, sandal, and boot images was introduced for multiclass classification tasks. Explanations of how the Common Crawl web-crawling process works were also shared.
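In the spirit of that from-scratch notebook (not its actual code), a minimal pre-norm transformer block in PyTorch:

```python
# Minimal pre-norm transformer block: residual self-attention + feed-forward.
import torch
import torch.nn as nn

class TransformerBlock(nn.Module):
    def __init__(self, d_model: int = 64, n_heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.ff = nn.Sequential(
            nn.Linear(d_model, 4 * d_model), nn.GELU(), nn.Linear(4 * d_model, d_model)
        )
        self.norm1, self.norm2 = nn.LayerNorm(d_model), nn.LayerNorm(d_model)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Normalize, attend, add residual; then the same for the MLP.
        h = self.norm1(x)
        x = x + self.attn(h, h, h, need_weights=False)[0]
        return x + self.ff(self.norm2(x))

x = torch.randn(2, 16, 64)           # (batch, sequence, d_model)
print(TransformerBlock()(x).shape)   # torch.Size([2, 16, 64])
```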