not much happened today
Chinese AI labs have released a wave of powerful, permissively licensed open-source models: GLM-4.5 and GLM-4.5-Air from Zhipu AI, Qwen3 Coder and Qwen3-235B from Alibaba, and Kimi K2 from Moonshot AI. Zhipu AI's GLM-4.5 is a 355B-parameter MoE model competitive with Claude 4 Opus and Gemini 2.5 Pro. Alibaba's Qwen3 Coder shows strong code-generation performance with a low edit failure rate, while Moonshot AI's Kimi K2 is a 1-trillion-parameter MoE model that tops benchmarks such as LiveCodeBench.

In video and image generation, xAI launched Grok Imagine, and Wan2.2 impressed with innovative image-to-video generation. Robotics advances include Figure's Figure-01 and Figure-02 humanoid robots and ViTPose++ for pose estimation in basketball analysis. SmolLM3's training and evaluation code was fully released under Apache 2.0.

OpenAI introduced Study Mode in ChatGPT to enhance interactive learning, and Runway rolled out Runway Aleph, a new in-context video model for multi-task visual generation. The community sees a competitive disadvantage for organizations avoiding these Chinese open-source models: "Orgs avoiding these models are at a significant competitive disadvantage," @corbtt noted.