## Pricing

| Type | Price (per 1M tokens) |
|---|---|
| Input tokens | $0.60 |
| Output tokens | $0.60 |
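With input and output tokens both priced at $0.60 per 1M, request cost is a single linear calculation. A minimal sketch (the function name `token_cost` and its defaults are illustrative, taken from the table above):

```python
def token_cost(input_tokens: int, output_tokens: int,
               input_rate: float = 0.60, output_rate: float = 0.60) -> float:
    """Estimate cost in USD, given per-1M-token rates from the pricing table."""
    return (input_tokens * input_rate + output_tokens * output_rate) / 1_000_000

# Example: 1M input + 1M output tokens -> $1.20
```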
## Capabilities

Vision, Multimodal, Reasoning, Function Calling, Tool Use, JSON Mode, Code Execution
## About Nous Hermes 2 Mixtral 8x7B

A Mixture-of-Experts variant of Hermes built on Mixtral 8x7B, trained on over 1M entries of primarily GPT-4-generated data, and suited to content generation and customer-service tasks. Available in quantized formats (GGUF, GPTQ, AWQ) for flexible deployment.
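Like other Hermes 2 releases, this model is trained on the ChatML prompt format, so self-hosted quantized deployments need to wrap each turn in ChatML markers. A minimal sketch of building such a prompt (the helper name `chatml_prompt` is illustrative):

```python
def chatml_prompt(system: str, user: str) -> str:
    """Wrap system and user turns in ChatML markers and open the assistant turn."""
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user}<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )
```

The trailing open `<|im_start|>assistant` turn cues the model to generate its reply; generation is typically stopped on the `<|im_end|>` token.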
## Model Specs

| Spec | Value |
|---|---|
| Released | 2023-12-12 |
| Parameters | 8x7B |
| Architecture | Mixture of Experts |
| Knowledge cutoff | 2023-12 |