GLM-4.5: Open Agentic, Reasoning, and Coding Foundation Models

Author
Neural Intelligence Network
Published
Mon 11 Aug 2025
Episode Link
https://podcasters.spotify.com/pod/show/neuralintelpod/episodes/GLM-4-5-Open-Agentic--Reasoning--and-Coding-Foundation-Models-e36n1rq

The source introduces GLM-4.5, a new open-source Mixture-of-Experts (MoE) large language model, along with GLM-4.5-Air, a compact variant. Developed by Zhipu AI and Tsinghua University, these models target agentic, reasoning, and coding (ARC) tasks and perform strongly in those areas despite having fewer parameters than many competitors. The paper details their multi-stage training process, which combines extensive pre-training with post-training via expert-model iteration and reinforcement learning, enabling hybrid reasoning modes that support both direct and deliberative responses. It also presents comprehensive evaluation results across numerous benchmarks, covering general chat, logical reasoning, and complex translation, along with insights into the models' RL infrastructure and data-synthesis methods.