ZAYA1-8B
About
ZAYA1-8B is a small mixture-of-experts (MoE) reasoning model from Zyphra with 8.4B total parameters and approximately 760M active parameters per token. It was trained entirely on AMD hardware, using 1,024 Instinct MI300X GPUs, and incorporates three architectural innovations: Compressed Convolutional Attention (CCA), a novel MLP-based expert router, and learned residual scaling. Post-training follows a four-stage reinforcement learning cascade: reasoning warmup, an adaptive curriculum, large-scale math/code RL with test-time-compute traces, and behavioral RL for instruction following. With extended test-time compute (Markovian recursive self-aggregation, RSA), it approaches or exceeds frontier models such as DeepSeek-V3.2 and GPT-OSS-120B on mathematics benchmarks. The weights are released under Apache 2.0 on Hugging Face, and a free serverless endpoint is available on Zyphra Cloud.
ZAYA1-8B has a 32K-token context window.
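The details of ZAYA1's router are not spelled out here, so the following is only a minimal PyTorch sketch of the general idea behind an MLP-based expert router: a small MLP replaces the single linear gate used in most MoE layers. The hidden size, expert count, and top-k values are illustrative assumptions, not ZAYA1-8B's actual configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MLPRouter(nn.Module):
    """Illustrative MoE router: a two-layer MLP replaces the usual
    single linear gate. All dimensions below are assumptions for the
    sketch, not ZAYA1-8B's published configuration."""

    def __init__(self, d_model: int, n_experts: int, d_hidden: int = 256, top_k: int = 2):
        super().__init__()
        self.top_k = top_k
        self.gate = nn.Sequential(
            nn.Linear(d_model, d_hidden),
            nn.SiLU(),
            nn.Linear(d_hidden, n_experts),
        )

    def forward(self, x: torch.Tensor):
        # x: (batch, seq, d_model) -> per-token logits over experts
        logits = self.gate(x)
        # keep only the top-k experts per token and renormalize their weights
        weights, indices = torch.topk(logits, self.top_k, dim=-1)
        weights = F.softmax(weights, dim=-1)
        return weights, indices  # routing weights and chosen expert ids

router = MLPRouter(d_model=1024, n_experts=16)
w, idx = router(torch.randn(2, 8, 1024))
print(w.shape, idx.shape)  # torch.Size([2, 8, 2]) torch.Size([2, 8, 2])
```

Since the weights are open on Hugging Face, a standard transformers loading snippet along these lines should apply. The repo id `Zyphra/ZAYA1-8B` and the `trust_remote_code` flag are assumptions (the custom CCA/MoE architecture may require custom modeling code); check the model card for the exact identifier.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Repo id is an assumption; confirm the exact name on Hugging Face.
model_id = "Zyphra/ZAYA1-8B"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",      # load in the checkpoint's native precision
    device_map="auto",       # place weights on available GPU(s)
    trust_remote_code=True,  # likely needed for the custom architecture
)

prompt = "Prove that the sum of two even integers is even."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=512)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```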