Topic: "workflow-optimization"
Qwen-Image 2.0 and Seedance 2.0
gpt-5.2 gpt-5.3-codex claude-opus-4.6 gemini-3-pro qwen-image-2.0 seedance-2.0 openai langchain-ai anthropic google-deepmind mistral-ai alibaba bytedance moonshot agentic-sandboxes multi-model-orchestration server-side-compaction coding-agent-ux long-running-agents model-release text-to-video image-generation parallel-execution funding git-compatible-database token-efficiency workflow-optimization hwchase17 nabbilkhan sydneyrunkle joecuevasjr pierceboggan reach_vb gdb ashtom
OpenAI advances its Responses API for multi-hour agent workflows with features like server-side compaction, hosted containers, and Skills API, alongside upgrading Deep Research to GPT-5.2 and adding connectors. Discussions around sandbox design highlight a shift towards sandbox-as-a-tool architectures, with LangChain enhancing its deepagents v0.4 with pluggable sandbox backends. Coding agent UX evolves with multi-model orchestration involving Claude Opus 4.6, GPT-5.3-Codex, and Gemini 3 Pro. EntireHQ raised $60M seed funding for a Git-compatible database capturing code intent and agent context. In model releases, Alibaba Qwen launched Qwen-Image-2.0 emphasizing 2K resolution and 1K-token prompts for unified generation and editing. ByteDance's Seedance 2.0 marks a significant leap in text-to-video quality, while Moonshot's Kimi introduces an Agent Swarm with up to 100 sub-agents and 4.5× faster parallel execution.
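The swarm-style parallelism mentioned above (many sub-agents running at once) boils down to fanning independent subtasks out to concurrent workers. A minimal sketch of the pattern, assuming a hypothetical `sub_agent` stand-in for a model call; no vendor API is implied:

```python
# Sketch of parallel sub-agent fan-out, the generic pattern behind
# swarm-style agents. `sub_agent` is a hypothetical stand-in for a
# model call handling one subtask.
from concurrent.futures import ThreadPoolExecutor


def sub_agent(subtask: str) -> str:
    # Placeholder for an I/O-bound model call.
    return f"done: {subtask}"


def fan_out(subtasks: list[str], max_workers: int = 8) -> list[str]:
    # Run independent subtasks concurrently; because model calls are
    # I/O-bound, overlapping them is where the wall-clock speedup
    # comes from.
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        return list(pool.map(sub_agent, subtasks))


print(fan_out(["plan", "code", "review"]))
# → ['done: plan', 'done: code', 'done: review']
```

`pool.map` preserves input order, so results line up with subtasks even though they complete out of order.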
ChatGPT starts testing ads on free tier + new $8/mo Go plan in the US
chatgpt-go codex openai ollama ads monetization memory agent-orchestration human-in-the-loop cli-tools context-length workflow-optimization sama sam_altman fidjissimo scaling01 tomwarren embirico adamdotdev ollama thsottiaux lateinteraction dbreunig
OpenAI announced the ChatGPT Go tier at $8/month and began testing ads in the US free tier, emphasizing that ads will not influence responses and will be clearly labeled. The update includes memory improvements and a "very fast Codex" feature teased by Sam Altman. The Codex CLI ecosystem now supports open-weight models with improved context length. Discussions highlight the importance of human-in-the-loop oversight for reliability in agent orchestration, and favor file-based interfaces over traditional retrieval-augmented generation.
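The human-in-the-loop pattern discussed above can be reduced to an approval gate in front of risky agent actions. A minimal sketch under assumed names (`propose_action`, `requires_approval`, and the tool names are all hypothetical, not any framework's API):

```python
# Hedged sketch of a human-in-the-loop approval gate for agent
# actions. All names here are illustrative, not a real framework.

def propose_action(task: str) -> dict:
    # Stand-in for a model call that proposes a tool invocation.
    return {"tool": "delete_file", "args": {"path": "/tmp/scratch.txt"}}


def requires_approval(action: dict) -> bool:
    # Gate destructive tools behind a human reviewer; read-only
    # tools could pass through unreviewed.
    return action["tool"] in {"delete_file", "send_email", "deploy"}


def run_with_oversight(task: str, approve) -> str:
    action = propose_action(task)
    if requires_approval(action) and not approve(action):
        return "rejected"
    return f"executed {action['tool']}"


# Auto-reject in this demo; a real loop would prompt the operator.
print(run_with_oversight("clean up", approve=lambda a: False))  # → rejected
```

Passing the reviewer in as a callable keeps the gate testable: a CLI prompt, a web approval queue, or an auto-policy can all slot in without changing the loop.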
Claude Skills grows: Open Standard, Directory, Org Admin
claude-skills gpt-5.2-codex gemini-3-flash functiongemma t5gemma-2 anthropic openai google-deepmind hugging-face agentic-ai fine-tuning long-context tool-calling on-device-ai multimodality security workflow-optimization sama gregbrockman philschmid
Claude Skills have gained significant traction since their October launch, with the Claude Skills talk reaching 100k views in one day, signaling growing adoption. Announcements include org admin support, a new Skills Directory, and a move to an open standard named Agent Skills. In frontier model launches, OpenAI released GPT-5.2-Codex, touted as the best agentic coding model, with improvements in native compaction, long-context reliability, and tool-calling, and an emphasis on real-world security impact. Google DeepMind introduced Gemini 3 Flash, positioning speed as a product feature that shapes workflows and user engagement, alongside FunctionGemma and T5Gemma 2, which emphasize on-device deployment, fine-tuning, and multimodality.
Andrew likes Agents
gpt-3.5 gpt-4 cyberrealistic_v40 platypus-xl sdxl-lightning openai stability-ai agents human-eval-benchmark fine-tuning local-llm-deployment inference-speed image-generation lora upscaling workflow-optimization andrew-ng lilian-weng emad
Andrew Ng's The Batch writeup on Agents highlighted the large jump in coding-benchmark performance from an iterative agent workflow: GPT-3.5 wrapped in an agent loop reached up to 95.1% correctness on HumanEval, surpassing zero-shot GPT-4 at 67.0%. The report also covers new Stable Diffusion models such as Cyberrealistic_v40, Platypus XL, and SDXL Lightning for Naruto-style image generation, alongside advances in LoRA and upscaling techniques. Discussions on local LLM deployment focus on hardware setups and fine-tuning strategies for efficient inference and multi-user serving. Emad's departure from Stability AI and new Sora videos from OpenAI were also noted.
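The iterative agent workflow behind those HumanEval numbers is a generate–test–revise loop: draft code, run tests, feed failures back, retry. A toy sketch of the pattern, with `call_model` as a hypothetical stand-in for an LLM call (not the actual setup from the writeup):

```python
# Hedged sketch of the iterative agent-loop pattern: generate code,
# run tests, feed failures back into the prompt, and retry.
# `call_model` is a toy stand-in for an LLM API call.

def call_model(prompt: str, attempt: int) -> str:
    # Toy model: the first draft is buggy; after seeing test
    # feedback in the prompt, it returns a fixed version.
    if "failed" in prompt:
        return "def add(a, b):\n    return a + b"
    return "def add(a, b):\n    return a - b"


def run_tests(code: str) -> list[str]:
    # Execute the candidate and collect failure messages.
    namespace: dict = {}
    exec(code, namespace)
    failures = []
    if namespace["add"](2, 3) != 5:
        failures.append("add(2, 3) failed: expected 5")
    return failures


def agent_loop(task: str, max_iters: int = 3) -> str:
    prompt = task
    for attempt in range(max_iters):
        code = call_model(prompt, attempt)
        failures = run_tests(code)
        if not failures:
            return code  # all tests pass
        # Feed concrete failures back so the next draft can fix them.
        prompt = task + "\nPrevious attempt failed:\n" + "\n".join(failures)
    raise RuntimeError("no passing solution within budget")


print(agent_loop("Write add(a, b)"))
```

The key ingredient is the feedback step: the loop's correctness gains come from giving the model concrete failure signals, not from repeated sampling alone.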