Topic: "dataset-sharing"
Genesis: Generative Physics Engine for Robotics (o1-mini version)
o1 o1-preview gpt-4o claude-3.5-sonnet gemini-2.0-pro llama-3-3b llama-3-70b openai google-deepmind meta-ai-fair hugging-face function-calling structured-outputs vision performance-benchmarks sdk webrtc reasoning math code-generation transformer-architecture model-training humanoid-robots search model-efficiency dataset-sharing aidan_mclau sundarpichai adcock_brett
OpenAI launched the o1 model API featuring function calling, structured outputs, vision support, and developer messages, using 60% fewer reasoning tokens than o1-preview. The model excels at math and code, scoring 0.76 on LiveBench Coding and outperforming Sonnet 3.5. OpenAI also shipped beta SDKs for Go and Java, along with WebRTC support and 60% lower prices. Google accelerated deployment of Gemini 2.0 Pro (Gemini-Exp-1206), showing improved coding, math, and reasoning performance. Meta AI FAIR introduced research on training transformers directly on raw bytes using dynamic entropy-based patching. Commercial humanoid robots were successfully deployed by an industry player. Hugging Face researchers demonstrated that a 3B Llama model can outperform the 70B Llama model on MATH-500 accuracy using test-time search techniques, highlighting efficiency gains from smaller models, though concerns about reproducibility and domain-specific limitations were noted.
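For readers who want to try the new o1 API surface, here is a minimal sketch of structured outputs using the openai Python SDK's `client.beta.chat.completions.parse` helper together with a developer message; the `MathReply` schema and the prompt are illustrative assumptions, not details from the announcement.

```python
from openai import OpenAI
from pydantic import BaseModel


class Step(BaseModel):
    explanation: str
    output: str


class MathReply(BaseModel):
    # Hypothetical schema for illustration; any Pydantic model works.
    steps: list[Step]
    final_answer: str


client = OpenAI()  # reads OPENAI_API_KEY from the environment

completion = client.beta.chat.completions.parse(
    model="o1",
    messages=[
        # o1 accepts "developer" messages in place of system messages.
        {"role": "developer", "content": "You are a careful math tutor."},
        {"role": "user", "content": "Solve 8x + 31 = 2. Show your steps."},
    ],
    response_format=MathReply,  # the SDK enforces the schema on the reply
)

reply = completion.choices[0].message.parsed
print(reply.final_answer)
```

The same `tools=` parameter on the create/parse call covers the function-calling side of the release; the structured-outputs path shown here is simply the easiest to verify end to end.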
Karpathy emerges from stealth?
mistral-7b mixtral-8x7b zephyr-7b gpt-4 llama-2 intel mistral-ai audiogen thebloke tokenization quantization model-optimization fine-tuning model-merging computational-efficiency memory-optimization retrieval-augmented-generation multi-model-learning meta-reasoning dataset-sharing open-source ethical-ai community-collaboration andrej-karpathy
Andrej Karpathy released a comprehensive two-hour tutorial on tokenization, covering techniques up to GPT-4's tokenizer and noting the complexity of Llama 2's SentencePiece tokenization (a minimal sketch of the core algorithm follows below). Discussions across AI Discord communities covered model optimization and efficiency, focusing on quantizing models like Mistral 7B and Zephyr-7B to reduce memory usage on consumer GPUs, including Intel's new weight-only quantization algorithm. Efforts to improve computational efficiency included selective augmentation, which cut costs by 57.76%, and comparisons of memory tokens versus kNN retrieval for Transformers. Members shared hardware-compatibility and software issues, alongside fine-tuning techniques such as LoRA and model merging. Innovative applications of LLMs in retrieval-augmented generation (RAG), multi-model learning, and meta-reasoning were explored. The community emphasized dataset sharing, open-source releases such as SDXL-VAE-encoded datasets and Audiogen AI codecs, and ethical AI use, including censorship and guardrails. Collaboration and resource sharing remain strong across these communities.
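As a companion to the tutorial, below is a minimal sketch of the byte-pair-encoding merge loop that underlies GPT-style tokenizers: start from raw UTF-8 bytes and repeatedly fuse the most frequent adjacent pair into a new token ID. The toy string and the three-merge vocabulary are assumptions for illustration, not content from the video.

```python
from collections import Counter


def most_common_pair(ids):
    """Return the most frequent adjacent (left, right) token pair."""
    return Counter(zip(ids, ids[1:])).most_common(1)[0][0]


def merge(ids, pair, new_id):
    """Replace every occurrence of `pair` in `ids` with `new_id`."""
    out, i = [], 0
    while i < len(ids):
        if i < len(ids) - 1 and (ids[i], ids[i + 1]) == pair:
            out.append(new_id)
            i += 2
        else:
            out.append(ids[i])
            i += 1
    return out


text = "aaabdaaabac"
ids = list(text.encode("utf-8"))  # byte-level start: IDs 0..255
merges = {}
for step in range(3):  # train a tiny vocabulary of 3 merges
    pair = most_common_pair(ids)
    new_id = 256 + step  # new tokens extend past the byte range
    merges[pair] = new_id
    ids = merge(ids, pair, new_id)

print(ids)     # compressed token sequence
print(merges)  # learned merge table, applied in order at encode time
```

Encoding new text replays the learned merges in training order; decoding walks the merge table backwards to recover bytes, which is why byte-level BPE can never produce out-of-vocabulary failures.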
1/16/2024: ArtificialAnalysis - a new model/host benchmark site
mixtral hermes-2-mixtral openchat-7b byte-mistral nous-research nvidia hugging-face summarization fine-tuning byte-level-tokenization multimodality inference-speed-optimization dataset-sharing quantization swyx gabriel_syme manojbh carsonpoole fullstack6209
Artificial Analysis launched a new model and hosting-provider comparison site, highlighted by swyx. The Nous Research AI Discord discussed summarization techniques using NVIDIA RTX 3090 and 2080 Ti GPUs to process around 100k tokens, and adapting prompts for smaller models like OpenChat 7B (a batched-inference sketch follows below). The availability of Hermes 2 Mixtral on Hugging Face's HuggingChat was noted, alongside challenges fine-tuning Mixtral with Axolotl. Discussions also covered byte-level tokenization experiments with Byte Mistral, multimodal training on COCO image bytes, and inference speed improvements using vLLM and llama.cpp. Members called for transparency in data sharing and for open-sourcing the Hermes 2 Mixtral dataset, and compared DPO and SFT methods as well as running quantized LLMs on an M1 MacBook Pro.
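The inference-speed side of the discussion is straightforward to reproduce. Below is a minimal sketch of batched summarization with vLLM's offline `LLM.generate` API; the OpenChat model ID, prompt template, and document chunks are assumptions for illustration, not details from the Discord threads.

```python
# pip install vllm  (requires a CUDA-capable GPU)
from vllm import LLM, SamplingParams

# Model ID is illustrative; any HF causal LM the hardware can hold works.
llm = LLM(model="openchat/openchat-3.5-0106")
params = SamplingParams(temperature=0.7, max_tokens=256)

# Long documents are split into chunks that fit the context window,
# then summarized in one batch; vLLM's continuous batching keeps the
# GPU saturated across all prompts.
chunks = ["<document chunk 1>", "<document chunk 2>"]
prompts = [f"Summarize the following text:\n\n{c}\n\nSummary:" for c in chunks]

for out in llm.generate(prompts, params):
    print(out.outputs[0].text.strip())
```

Chunk-then-summarize (optionally followed by a second pass that summarizes the partial summaries) is the usual way to push ~100k-token inputs through 7B-class models with limited context windows.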