Model: "chameleon-7b"
Nothing much happened today
chameleon-7b chameleon-30b xlam-1b gpt-3.5 phi-3-mini mistral-7b-v3 huggingface truth_terminal microsoft apple openai meta-ai-fair yi axolotl amd salesforce function-calling multimodality model-releases model-updates model-integration automaticity procedural-memory text-image-video-generation
Hugging Face released a browser-based timestamped Whisper demo built on transformers.js. A Twitter bot by truth_terminal became the first "semiautonomous" bot to secure VC funding. Microsoft gave up its OpenAI board observer seat and Apple dropped plans to take one amid regulatory scrutiny. Per Reddit discussion, Meta is finalizing a major upgrade addressing hallucination issues. The Yi model gained popularity on GitHub with 7.4K stars and 454 forks, with potential integration with Axolotl for pregeneration and preprocessing. AMD technologies are enabling household and small-business AI appliances. Meta released the Chameleon-7b and Chameleon-30b models on Hugging Face, which tokenize text and images into a unified token space. Salesforce's xLAM-1b model outperforms GPT-3.5 at function calling despite its much smaller size. Anole pioneered open-source multimodal text-image-video generation at up to 720p and 144fps. Phi-3 Mini grew from 3.8B to 4.7B parameters and gained function calling, competing with Mistral-7b v3. "System 2 distillation" in humans relates to automaticity and procedural memory.
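The timestamped Whisper demo runs entirely in-browser on transformers.js; as a rough Python analogue, here is a minimal sketch using the transformers pipeline API (the whisper-small checkpoint and the audio path are illustrative assumptions):

```python
# Minimal sketch: segment-level timestamps from Whisper via the
# transformers pipeline. Checkpoint and audio file are assumptions;
# the actual demo uses transformers.js in the browser instead.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="openai/whisper-small",
    chunk_length_s=30,  # split long audio into 30s chunks for decoding
)

result = asr("audio.wav", return_timestamps=True)
for chunk in result["chunks"]:
    start, end = chunk["timestamp"]  # (start_seconds, end_seconds)
    print(f"[{start} - {end}] {chunk['text']}")
```

Passing return_timestamps="word" instead yields word-level timestamps, which is what makes this style of demo useful for subtitling.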
There's Ilya!
chameleon-7b chameleon-34b deepseek-coder-v2 gpt-4-turbo claude-3-opus voco-llama safe-superintelligence-inc openai anthropic meta deepseek google-deepmind parallel-decoding code-generation quantization training-dynamics vision benchmarks datasets image-captioning reasoning memory-optimization ilya-sutskever jan-leike ylecun akhaliq philschmid rohanpaul_ai mervenoyann fchollet
Ilya Sutskever co-founded Safe Superintelligence Inc. shortly after leaving OpenAI, while Jan Leike moved to Anthropic. Meta released new models including Chameleon 7B and 34B, which take mixed-modal input by quantizing images into the same unified token space as text. DeepSeek-Coder-V2 shows code capabilities comparable to GPT-4 Turbo, supporting 338 programming languages and a 128K context length. Consistency Large Language Models (CLLMs) enable parallel decoding, generating multiple tokens per step. Grokked Transformers demonstrate reasoning that emerges through training dynamics shaping memory formation and generalization. VoCo-LLaMA compresses vision tokens with LLMs, improving understanding of temporal correlation in video. The BigCodeBench benchmark evaluates LLMs on 1,140 coding tasks across 139 Python libraries, topped by DeepSeek-Coder-V2 and Claude 3 Opus. PixelProse is a large image-caption dataset of 16M pairs with reduced toxicity.
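Since the Chameleon checkpoints ship with a transformers integration, a mixed-modal prompt is only a few lines. A minimal sketch, assuming a recent transformers release and an illustrative image URL:

```python
# Minimal sketch of mixed-modal prompting with the released Chameleon
# weights. The image URL is a placeholder; requires a transformers
# version that includes the Chameleon classes, plus torch/accelerate.
import requests
import torch
from PIL import Image
from transformers import ChameleonForConditionalGeneration, ChameleonProcessor

processor = ChameleonProcessor.from_pretrained("facebook/chameleon-7b")
model = ChameleonForConditionalGeneration.from_pretrained(
    "facebook/chameleon-7b", torch_dtype=torch.bfloat16, device_map="auto"
)

image = Image.open(requests.get("https://example.com/scene.png", stream=True).raw)
# <image> is the placeholder the processor swaps for quantized image tokens,
# so text and image end up in one sequence over a single vocabulary.
prompt = "What is happening in this image?<image>"
inputs = processor(text=prompt, images=image, return_tensors="pt").to(
    model.device, dtype=torch.bfloat16
)

out = model.generate(**inputs, max_new_tokens=60)
print(processor.decode(out[0], skip_special_tokens=True))
```

The notable design choice is that images become discrete tokens in the same vocabulary as text, so one autoregressive decoder handles both modalities rather than relying on a bolted-on vision tower.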
Gemini launches context caching... or does it?
nemotron llama-3-70b chameleon-7b chameleon-34b gemini-1.5-pro deepseek-coder-v2 gpt-4-turbo claude-3-opus nvidia meta-ai-fair google deepseek hugging-face context-caching model-performance fine-tuning reinforcement-learning group-relative-policy-optimization large-context model-training coding model-release rohanpaul_ai _philschmid aman-sanger
Nvidia's Nemotron ranks as the #1 open model on LMSYS and #11 overall, surpassing Llama-3-70b. Meta AI released the Chameleon 7B/34B models after further post-training. Google's Gemini introduced context caching, offering a cost-efficient middle ground between RAG and finetuning, with a minimum input token count of 32k and no upper limit on cache duration. DeepSeek launched DeepSeek-Coder-V2, a 236B-parameter model that outperforms GPT-4 Turbo, Claude-3-Opus, and Gemini-1.5-Pro on coding tasks, supports 338 programming languages, and extends context length to 128K. It was further pretrained on 6 trillion tokens, aligned with the Group Relative Policy Optimization (GRPO) algorithm, and is available on Hugging Face with a commercial license. These developments highlight advances in model performance, context caching, and large-scale coding models.
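For the caching workflow itself, a minimal sketch against the google-generativeai SDK as it looked at launch (the model version, file, and TTL below are illustrative assumptions):

```python
# Minimal sketch of Gemini context caching. API surface per the
# google-generativeai SDK at launch; names and values are illustrative.
import datetime

import google.generativeai as genai
from google.generativeai import caching

genai.configure(api_key="YOUR_API_KEY")

# Upload a large document once; the cached prefix has to clear the
# 32k-token minimum before caching is allowed at all.
doc = genai.upload_file("big_codebase_dump.txt")

cache = caching.CachedContent.create(
    model="models/gemini-1.5-pro-001",  # caching requires a pinned model version
    system_instruction="Answer questions about the attached codebase.",
    contents=[doc],
    ttl=datetime.timedelta(hours=1),  # extendable; no upper limit on duration
)

# Subsequent requests reuse the cached prefix at a reduced per-token rate.
model = genai.GenerativeModel.from_cached_content(cached_content=cache)
resp = model.generate_content("Where is the request router defined?")
print(resp.text)
```

The trade-off versus RAG is storage: cached tokens are billed for each hour they sit in the cache, so caching pays off for large, frequently reused prefixes rather than small prompts.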