LLM Reference

ZAYA1-8B

zaya1-8b

Deprecated
Open Source

About

ZAYA1-8B is a small mixture-of-experts (MoE) reasoning model from Zyphra with 8.4B total parameters and approximately 760M active parameters per token. Trained entirely on AMD Instinct MI300X hardware (1,024 GPUs), it incorporates three architectural innovations: Compressed Convolutional Attention (CCA), a novel MLP-based expert router, and learned residual scaling. Post-training uses a four-stage reinforcement learning cascade covering reasoning warmup, adaptive curriculum, large-scale math/code RL with test-time compute traces, and behavioral RL for instruction following. On extended compute (Markovian RSA), it approaches or exceeds frontier models such as DeepSeek-V3.2 and GPT-OSS-120B on mathematics benchmarks. Released under Apache 2.0 as open weights on Hugging Face and as a free serverless endpoint on Zyphra Cloud.
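
The MLP-based expert router noted above replaces the single linear gating layer used in most MoE designs. This page does not give Zyphra's router details, so the sketch below is a minimal, hypothetical PyTorch illustration of the general idea (a small two-layer MLP scores each token against every expert, then the top-k experts are selected), not ZAYA1's actual implementation.

```python
# Minimal sketch of an MLP-based MoE router (hypothetical; not Zyphra's code).
# A two-layer MLP produces per-token expert logits; the usual baseline this
# replaces is a single nn.Linear gate.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MLPRouter(nn.Module):
    def __init__(self, d_model: int, n_experts: int, d_hidden: int = 256, top_k: int = 2):
        super().__init__()
        self.top_k = top_k
        self.gate = nn.Sequential(
            nn.Linear(d_model, d_hidden),
            nn.GELU(),
            nn.Linear(d_hidden, n_experts),
        )

    def forward(self, x: torch.Tensor):
        # x: (batch, seq, d_model) -> per-token logits over experts
        logits = self.gate(x)                              # (batch, seq, n_experts)
        weights, indices = logits.topk(self.top_k, dim=-1) # keep k best experts per token
        weights = F.softmax(weights, dim=-1)               # renormalize over chosen experts
        return weights, indices                            # routing weights and expert ids

router = MLPRouter(d_model=2048, n_experts=32)
tokens = torch.randn(1, 16, 2048)
w, idx = router(tokens)  # w: (1, 16, 2), idx: (1, 16, 2)
```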

ZAYA1-8B has a 32K-token context window.
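
In practice this means prompts plus generated output must fit in 32,768 tokens. Below is a minimal client-side length check using the Hugging Face tokenizer; the repo id `Zyphra/ZAYA1-8B` is an assumption for illustration, so check the model's actual Hugging Face listing before use.

```python
# Sketch: guard against exceeding the 32K context window (32,768 tokens).
# The repo id is an assumption -- verify on Zyphra's Hugging Face page.
from transformers import AutoTokenizer

MAX_CONTEXT = 32_768

tokenizer = AutoTokenizer.from_pretrained("Zyphra/ZAYA1-8B", trust_remote_code=True)

def fits_in_context(prompt: str, reserve_for_output: int = 1024) -> bool:
    """Return True if the prompt leaves room for `reserve_for_output` new tokens."""
    n_tokens = len(tokenizer.encode(prompt))
    return n_tokens + reserve_for_output <= MAX_CONTEXT
```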

Capabilities

Vision · Multimodal · Reasoning · Function Calling · Tool Use · Structured Outputs · Code Execution · Prompt Caching · Batch API · Audio · Fine-tuning

Specifications

Family: ZAYA1
Released: 2026-05-06
Parameters: 8.4B
Context: 32K
Architecture: MoE
Specialization: reasoning
License: Apache 2.0

Created by

Zyphra
Hybrid architecture boosts edge AI efficiency

Palo Alto, California, United States
Founded 2021
Website