LLM Reference

GLM-5

About

Flagship open-weight foundation model from Zhipu AI with 744B parameters (40B active per token) in a Mixture-of-Experts architecture. Trained on 28.5T tokens using DeepSeek Sparse Attention on Huawei Ascend hardware. Achieves state-of-the-art performance on coding and agentic benchmarks (SWE-bench Verified: 77.8%), and supports autonomous planning, multi-step tool use, and self-correction.
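A back-of-envelope sketch of what the sparse MoE design above implies: only a fraction of the total weights participate in each forward pass. The figures come from the spec (744B total, 40B active); everything else here is illustrative arithmetic.

```python
# Back-of-envelope: fraction of weights active per token in the MoE design above.
total_params = 744e9   # 744B total parameters (from the spec)
active_params = 40e9   # 40B active per token (from the spec)

active_fraction = active_params / total_params
print(f"{active_fraction:.1%}")  # → 5.4%
```

So per-token compute scales with the roughly 5% of weights routed to, not the full 744B, which is the usual motivation for the MoE layout.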

Capabilities

Vision, Multimodal, Reasoning, Function Calling, Tool Use, Structured Outputs, Code Execution

Providers (5)

Provider        Input (per 1M)   Output (per 1M)   Type
Fireworks AI    $1.00            $3.20             Serverless
OpenRouter      $0.72            $2.30             Serverless
Together AI     $1.00            $3.20             Serverless
GCP Vertex AI   $1.00            $3.20             Serverless
NVIDIA NIM      n/a              n/a               Serverless
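The per-1M-token rates listed above translate into per-request cost with simple arithmetic. A minimal sketch, using the table's serverless prices; the provider names and the `estimate_cost` helper are just illustrative, not any provider's API.

```python
# USD per 1M tokens (input, output), taken from the provider table above.
PRICES = {
    "Fireworks AI": (1.00, 3.20),
    "OpenRouter": (0.72, 2.30),
    "Together AI": (1.00, 3.20),
    "GCP Vertex AI": (1.00, 3.20),
}

def estimate_cost(provider: str, input_tokens: int, output_tokens: int) -> float:
    """Return the USD cost of one request at the listed serverless rates."""
    in_rate, out_rate = PRICES[provider]
    return (input_tokens * in_rate + output_tokens * out_rate) / 1_000_000

# Example: a request with 50k input tokens and 2k output tokens on OpenRouter.
print(round(estimate_cost("OpenRouter", 50_000, 2_000), 4))  # → 0.0406
```

Note the asymmetry: output tokens cost roughly 3x input tokens at every listed provider, so long completions dominate the bill.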

Benchmark Scores (1)

Benchmark       Score   Version/Source
SWE-bench Pro   38.6    DAT-1778

Specifications

Family: GLM-5
Released: 2026-02-11
Parameters: 744B total, 40B active
Context: 200k tokens
Architecture: Mixture of Experts
Specialization: general
License: MIT
Training: finetuning

Created by

Zhipu AI, a Chinese AI research lab developing the GLM family of language models.

Beijing, China
Founded 2019