Topic: "gpu-scheduling"
Gemini Ultra is out, to mixed reviews
gemini-ultra gemini-advanced solar-10.7b openhermes-2.5-mistral-7b subformer billm google openai mistral-ai hugging-face multi-gpu-support training-data-contamination model-merging model-alignment listwise-preference-optimization high-performance-computing parameter-sharing post-training-quantization dataset-viewer gpu-scheduling fine-tuning vram-optimization
Google released Gemini Ultra as the paid "Gemini Advanced with Ultra 1.0" tier, retiring the Bard brand. Reviews called it "slightly faster/better than ChatGPT" but noted reasoning gaps. The Steam Deck was highlighted as a surprisingly capable AI workstation, able to run models such as Solar 10.7B. Community discussions covered multi-GPU support for the open-source Unsloth library, training-data contamination from OpenAI outputs, ethical concerns over model merging, and new alignment techniques such as Listwise Preference Optimization (LiPO). The Mojo programming language drew praise for high-performance computing. On the research side, the Subformer model combines sandwich-style parameter sharing with SAFE embeddings for efficiency, and BiLLM introduced 1-bit post-training quantization to cut resource use. An OpenHermes dataset viewer tool was launched, GPU scheduling with Slurm was discussed, and fine-tuning challenges for models like OpenHermes-2.5-Mistral-7B, along with their VRAM requirements, rounded out the topics of interest.
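As a rough illustration of why VRAM requirements dominate these fine-tuning discussions, the back-of-envelope sketch below (an assumption-laden illustration, not from any of the linked discussions) estimates weight memory for a Mistral-7B-class model at several precisions, including the ~1-bit regime that BiLLM-style post-training quantization targets. The parameter count and bit widths are assumptions for illustration.

```python
# Back-of-envelope VRAM estimate for model *weights only*.
# Real fine-tuning also needs gradients, optimizer state, and activations,
# so actual requirements are substantially higher than these figures.

def weight_memory_gib(n_params: float, bits_per_param: float) -> float:
    """GiB needed to hold n_params weights at the given precision."""
    return n_params * bits_per_param / 8 / 2**30

# Approximate parameter count of a Mistral-7B-class model (assumption).
n = 7.24e9

for label, bits in [
    ("fp16", 16),
    ("int8", 8),
    ("int4", 4),
    ("~1-bit (BiLLM-style)", 1.1),  # BiLLM reports ~1.1 bits/weight average
]:
    print(f"{label:>22}: {weight_memory_gib(n, bits):5.2f} GiB")
```

The fp16 row alone (~13.5 GiB) already exceeds many consumer GPUs, which is why quantized inference and memory-efficient fine-tuning keep coming up in these threads.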