Topic: "evaluation-methods"
not much happened today
mythos anthropic openai langchain nous-research cybersecurity sandboxing reinforcement-learning agent-architecture memory-management model-deployment software-security evaluation-methods kimmonismus paul_cal gneubig kentonvarda boazbaraktcs ylecun deanwball hwchase17 vtrivedy10 sarahcat21 aijoey
Anthropic's Mythos and OpenAI's upcoming restricted cyber-capable models are central to recent discussions, with debate over how realistic their security threat models are and how such capabilities should be evaluated. LangChain's Deep Agents deployment introduces an open-memory, model-agnostic agent harness that emphasizes open protocols and memory ownership. Sandboxes are gaining prominence as core infrastructure for reinforcement learning, with labs running up to 100K concurrent sandboxes and aiming for 1M. Nous Research's Hermes Agent continues to gain traction with new integrations and features such as a web-based HUD and token-cost tracking.
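As a rough illustration of why sandboxes matter for RL training loops, here is a minimal sketch of executing an agent-generated rollout in an isolated working directory and mapping the outcome to a scalar reward. The function name, timeout, and reward rule are illustrative assumptions; production fleets of 100K+ sandboxes use dedicated isolation layers (containers, microVMs) and task-specific reward parsing, not a bare subprocess call.

```python
import subprocess
import tempfile
from pathlib import Path


def run_rollout_in_sandbox(agent_code: str, timeout_s: int = 10) -> float:
    """Execute agent-generated code in a throwaway directory and return a reward.

    Stand-in for a real sandbox service; hypothetical helper for illustration.
    """
    with tempfile.TemporaryDirectory() as workdir:
        script = Path(workdir) / "rollout.py"
        script.write_text(agent_code)
        try:
            proc = subprocess.run(
                ["python", str(script)],
                cwd=workdir,
                capture_output=True,
                text=True,
                timeout=timeout_s,  # kill runaway rollouts
            )
        except subprocess.TimeoutExpired:
            return 0.0  # timed-out rollouts earn no reward
        # Toy reward: 1.0 for a clean exit, 0.0 otherwise. A real harness
        # would parse task-specific outputs or run unit tests instead.
        return 1.0 if proc.returncode == 0 else 0.0
```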
not much happened today + AINews Podcast?
superforecaster-ai llama-3 reflection-70b glean sambanova cerebras stanford google apple hugging-face lmsys prompt-engineering research-ideas inference-speed retrieval-augmented-generation evaluation-methods visual-intelligence on-device-ai model-performance benchmarking novelty-detection danhendrycks benjamin-clavie bclavie bindureddy swyx borismpower corbtt drjimfan clementdelangue rohanpaul_ai
Glean doubled its valuation again. Dan Hendrycks' Superforecaster AI generates plausible election forecasts with interesting prompt engineering. A Stanford study found that LLM-generated research ideas are judged statistically more novel than those from expert humans. SambaNova announced faster inference for Llama 3 models, surpassing Cerebras. Benjamin Clavie gave a notable talk on retrieval-augmented generation techniques. Strawberry is reported to launch in two weeks. Google Illuminate offers AI-generated podcast discussions about papers and books. Apple unveiled new AI features in iOS 18, including Visual Intelligence and an improved Siri, with on-device and cloud processing powering camera-based calendar event additions. The Reflection 70B model sparked controversy over its performance claims. Experts highlighted the unreliability of traditional benchmarks like MMLU and HumanEval, recommending alternative evaluation methods such as the LMSys Chatbot Arena and Hugging Face's open-sourced Lighteval suite. The AI research community continues to explore AI's role in generating novel research ideas and improving benchmarking.
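For context on the arena-style alternative to static benchmarks, here is a minimal sketch of the Elo-style rating update that pairwise evaluations rely on (Chatbot Arena fits a Bradley-Terry model over many crowdsourced battles, so treat this as a simplified stand-in). The model names and the toy battle log are made up for illustration.

```python
from collections import defaultdict


def update_elo(ratings, model_a, model_b, winner, k=32.0):
    """One Elo update from a single pairwise comparison; winner is 'a', 'b', or 'tie'."""
    ra, rb = ratings[model_a], ratings[model_b]
    expected_a = 1.0 / (1.0 + 10 ** ((rb - ra) / 400.0))
    score_a = {"a": 1.0, "b": 0.0, "tie": 0.5}[winner]
    ratings[model_a] = ra + k * (score_a - expected_a)
    ratings[model_b] = rb + k * ((1.0 - score_a) - (1.0 - expected_a))


# Toy battle log; real arena data comes from crowdsourced human votes.
battles = [
    ("model-x", "model-y", "a"),
    ("model-x", "model-z", "tie"),
    ("model-y", "model-z", "b"),
]

ratings = defaultdict(lambda: 1000.0)  # every model starts at 1000
for a, b, winner in battles:
    update_elo(ratings, a, b, winner)

print(dict(ratings))
```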