
zai / glm-4.5
GLM-4.5 is a 355B-parameter Mixture-of-Experts model with 32B active parameters per token. It offers a 128K-token context window, hybrid reasoning modes (Thinking and Non-Thinking), and generation speeds above 100 tokens per second. It is ranked first among open-source models globally.
Provider: zai
Model type: chat
Location: rest
Context window: 131,072 tokens
Pricing
Input: $0.60 per million tokens
Output: $2.20 per million tokens
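The listed prices imply a straightforward per-request cost model. A minimal sketch (the function name and example token counts are my own, not from the source):

```python
def estimate_cost_usd(input_tokens: int, output_tokens: int,
                      input_price_per_m: float = 0.6,
                      output_price_per_m: float = 2.2) -> float:
    """Estimate GLM-4.5 request cost from the listed per-million-token prices."""
    return (input_tokens * input_price_per_m
            + output_tokens * output_price_per_m) / 1_000_000

# A 10,000-token prompt with a 2,000-token reply:
print(round(estimate_cost_usd(10_000, 2_000), 4))  # → 0.0104
```

Output tokens dominate the bill at roughly 3.7x the input rate, so long generations cost disproportionately more than long prompts.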
Features
Tool Calling: Supported
JSON Mode: Supported
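To illustrate how the two supported features appear in a request, here is a hedged sketch of a chat-completions payload in the common OpenAI-compatible shape. The tool name, message content, and exact field schema are assumptions for illustration; the real endpoint and parameter names should be checked against the provider's API reference. The payload is only constructed, not sent:

```python
import json

payload = {
    "model": "glm-4.5",
    "messages": [
        {"role": "user", "content": "What's the weather in Berlin?"}
    ],
    # Tool Calling: declare a function the model may choose to invoke.
    "tools": [{
        "type": "function",
        "function": {
            "name": "get_weather",  # hypothetical tool name
            "description": "Look up current weather for a city",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    }],
    # JSON Mode: constrain the reply to valid JSON.
    "response_format": {"type": "json_object"},
}

print(json.dumps(payload, indent=2))
```

In this convention, tool calling and JSON mode are independent request options: the model may answer by emitting a `get_weather` call, and any plain-text answer is constrained to a JSON object.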
