
Nous Hermes 2 Mixtral 8x7B on Fireworks AI

Hermes 2 · Nous Research

Deployment: Provisioned

Pricing

Type             Price (per 1M tokens)
Input tokens     $0.50
Output tokens    $0.50
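With input and output priced at the same per-token rate, estimating the cost of a request is a single multiplication. A minimal sketch using the rates listed above (the function name is illustrative, not part of any Fireworks SDK):

```python
# Estimate request cost for Nous Hermes 2 Mixtral 8x7B on Fireworks AI,
# using the listed rates of $0.50 per 1M input and output tokens.

INPUT_PRICE_PER_M = 0.50   # USD per 1M input tokens
OUTPUT_PRICE_PER_M = 0.50  # USD per 1M output tokens

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the estimated USD cost for one request."""
    return (input_tokens * INPUT_PRICE_PER_M
            + output_tokens * OUTPUT_PRICE_PER_M) / 1_000_000

# e.g. a 2,000-token prompt with a 500-token completion:
print(f"${estimate_cost(2_000, 500):.6f}")  # $0.001250
```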

Capabilities

Vision · Multimodal · Reasoning · Function Calling · Tool Use · JSON Mode · Code Execution

About Nous Hermes 2 Mixtral 8x7B

A Mixtral mixture-of-experts variant of the Hermes model line, trained on more than 1M primarily GPT-4-generated entries and suited to content generation and customer-service use cases. It is also available in quantized formats (GGUF, GPTQ, AWQ) for flexible deployment.

Get Started
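Fireworks AI serves models through an OpenAI-compatible chat-completions endpoint. A minimal sketch of building a request for this model follows; the model identifier is an assumption based on Fireworks' usual naming scheme, so check the model page for the exact id before using it:

```python
import json

# Endpoint for Fireworks AI's OpenAI-compatible chat completions API.
API_URL = "https://api.fireworks.ai/inference/v1/chat/completions"
# Assumed model id -- verify against the Fireworks model page.
MODEL_ID = "accounts/fireworks/models/nous-hermes-2-mixtral-8x7b-dpo"

def build_request(prompt: str, max_tokens: int = 256) -> dict:
    """Build the JSON payload for a single-turn chat completion."""
    return {
        "model": MODEL_ID,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }

payload = build_request("Summarize mixture-of-experts routing in two sentences.")
print(json.dumps(payload, indent=2))
# Send with an HTTP POST to API_URL, passing the payload as JSON and an
# "Authorization: Bearer <your Fireworks API key>" header.
```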

Model Specs

Released           2023-12-12
Parameters         8x7B
Architecture       Mixture of Experts
Knowledge cutoff   2023-12
