Nous Hermes 2 Mixtral 8x7B

About

A Mixtral 8x7B MoE variant of the Hermes series, fine-tuned on more than 1M entries of primarily GPT-4-generated data and aimed at content generation and customer-service tasks. Available in quantized formats (GGUF, GPTQ, AWQ) for flexible deployment.
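
The GGUF quantizations can be run locally with llama-cpp-python. A minimal sketch follows; the file name is a placeholder for whichever quantization level you actually download, so check the quantized release for real file names.

```python
# Minimal sketch: running a GGUF quantization locally with llama-cpp-python.
# The model_path below is a placeholder, not a real distributed file name.
from llama_cpp import Llama

llm = Llama(
    model_path="nous-hermes-2-mixtral-8x7b.Q4_K_M.gguf",  # hypothetical file
    n_ctx=4096,       # context window to allocate
    n_gpu_layers=-1,  # offload all layers to GPU if one is available
)

out = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarize Mixture of Experts in one sentence."},
    ],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```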

Capabilities

Vision · Multimodal · Reasoning · Function Calling · Tool Use · Structured Outputs · Code Execution
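
Hermes 2 models are prompted in ChatML. Chat endpoints apply the template automatically, but against a raw completion endpoint it can be built by hand, as in this sketch (stop-token handling should be confirmed against the provider's docs):

```python
# Minimal sketch: Hermes 2 models use the ChatML prompt format.
# Only needed against raw completion endpoints; chat endpoints
# apply this template for you.
def chatml_prompt(system: str, user: str) -> str:
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user}<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )

prompt = chatml_prompt(
    "You are Hermes 2, a helpful assistant.",
    "Return a JSON object with fields 'city' and 'country' for Paris.",
)
# Generation should stop on "<|im_end|>".
```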

Providers (3)

Provider        Input (per 1M)   Output (per 1M)   Type
OctoAI API      $0.15            $0.15             Serverless
Fireworks AI    $0.50            $0.50             Provisioned
Together AI     $0.60            $0.60             Serverless
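
To turn the per-1M rates above into a per-request figure, multiply each token count by its rate and divide by one million. A small sketch using the Together AI row (the token counts are invented for illustration):

```python
# Sketch: request cost under the per-1M-token pricing in the table above.
def request_cost(input_tokens: int, output_tokens: int,
                 in_price_per_m: float, out_price_per_m: float) -> float:
    return (input_tokens * in_price_per_m +
            output_tokens * out_price_per_m) / 1_000_000

# Example: 3,000 prompt tokens + 500 completion tokens on Together AI
# ($0.60 in / $0.60 out per 1M tokens) -> $0.0021
print(f"${request_cost(3_000, 500, 0.60, 0.60):.4f}")
```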

Specifications

Family: Hermes 2
Released: 2023-12-12
Parameters: 8x7B
Architecture: Mixture of Experts
Knowledge cutoff: 2023-12
Specialization: general
Training: fine-tuning
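
On the Architecture row: a Mixture of Experts layer routes each token to a small subset of expert feed-forward networks instead of one dense FFN; Mixtral-style layers pick the top 2 of 8 experts per token. The sketch below is illustrative only (random weights, ReLU stand-in for the real FFN), not the model's actual implementation:

```python
# Minimal sketch of top-2 Mixture-of-Experts routing, Mixtral-style:
# 8 expert FFNs, each token processed by only the 2 experts its router
# scores highest, weighted by a softmax over those 2 scores.
import numpy as np

rng = np.random.default_rng(0)
n_experts, top_k, d_model = 8, 2, 16

# Illustrative stand-ins: router projection and one weight matrix per expert.
router_w = rng.standard_normal((d_model, n_experts))
experts = [rng.standard_normal((d_model, d_model)) for _ in range(n_experts)]

def moe_layer(x: np.ndarray) -> np.ndarray:
    """x: (tokens, d_model) -> (tokens, d_model) via top-2 expert mixing."""
    logits = x @ router_w                          # (tokens, n_experts)
    top = np.argsort(logits, axis=-1)[:, -top_k:]  # indices of the 2 best experts
    out = np.zeros_like(x)
    for t in range(x.shape[0]):
        chosen = logits[t, top[t]]
        gates = np.exp(chosen - chosen.max())
        gates /= gates.sum()                       # softmax over the 2 experts
        for gate, e in zip(gates, top[t]):
            out[t] += gate * np.maximum(x[t] @ experts[e], 0.0)  # ReLU FFN stand-in
    return out

tokens = rng.standard_normal((4, d_model))
print(moe_layer(tokens).shape)  # (4, 16)
```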

Created by

Nous Research
Human-centric AI model innovation

New York, New York, United States
Founded 2023