LLM Reference

OpenHermes 7B

About

OpenHermes 7B is a large language model fine-tuned on open-source data, using sample packing to speed up training. It is built on the Llama-2-7b-hf base and trained on roughly 242,000 entries, including GPT-4-generated content from open AI-community datasets such as GPTeacher and WizardLM; certain private datasets were deliberately excluded. Its benchmark results on GPT4All, BigBench, and TruthfulQA are mixed, indicating variable strengths across tasks. The model is not available for serverless API deployment, but it can be served through dedicated Inference Endpoints.

Capabilities

Multimodal, Function Calling, Tool Use, JSON Mode

Specifications

Parameters: 7B
Architecture: Decoder Only
Specialization: General