All tags
Topic: "batch-processing"
not much happened today
claude-3.5-sonnet opencoder anthropic microsoft sambanova openai langchain llamaindex multi-agent-systems natural-language-interfaces batch-processing harmful-content-detection secret-management retrieval-augmented-generation error-analysis memory-management web-scraping autonomous-agents sophiamyang tom_doerr omarsar0 _akhaliq andrewyng giffmana
This week in AI news, Anthropic launched Claude 3.5 Sonnet, enabling desktop app control via natural language. Microsoft introduced Magentic-One, a multi-agent system built on the AutoGen framework. OpenCoder was unveiled as an open "cookbook" for training code large language models. SambaNova is sponsoring a hackathon with prizes of up to $5,000 for building real-time AI agents. Sophiamyang announced new Batch and Moderation APIs, offering 50% lower cost and multi-dimensional harmful-text detection. Open-source tool releases included Infisical for secret management, CrewAI for autonomous agent orchestration, and Crawlee for web scraping. Research highlights include SCIPE for error analysis in LLM chains, the Context Refinement Agent for improved retrieval-augmented generation, and MemGPT for managing LLM memory. The week also saw a legal win for OpenAI in the RawStory copyright case, which affirmed that facts used in LLM training are not copyrightable.
12/25/2023: Nous Hermes 2 Yi 34B for Christmas
nous-hermes-2 yi-34b nucleusx yayi-2 ferret teknium nous-research apple mixtral deepseek qwen huggingface wenge-technology quantization model-optimization throughput-metrics batch-processing parallel-decoding tensor-parallelization multimodality language-model-pretraining model-benchmarking teknium carsonpoole casper_ai pradeep1148 osanseviero metaldragon01
Teknium released Nous Hermes 2 on Yi 34B, positioning it as a top open model alongside Mixtral, DeepSeek, and Qwen. Apple introduced Ferret, a new open-source multimodal LLM. Discussions in the Nous Research AI Discord focused on model optimization and quantization techniques such as AWQ, GPTQ, and AutoAWQ, with insights on proprietary optimizations and throughput metrics. Additional highlights include the NucleusX model (30B parameters, 80 MMLU) being added to transformers, and the YAYI 2 language model from Wenge Technology, pretrained on 2.65 trillion tokens. It was noted that "AutoAWQ outperforms vLLM up to batch size 8", and proprietary parallel decoding and tensor parallelization across GPUs were discussed as routes to faster inference.