LLM Reference

Snorkel

1 model · Released 2023 · From $0.20/1M input tokens

About

Snorkel AI develops large language models (LLMs) aligned to specific tasks and domains. Rather than relying on ever-larger preference datasets, their approach uses smaller, specialized reward models to guide alignment. A prominent example is Snorkel-Mistral-PairRM-DPO, which refines Mistral-7B-Instruct-v0.2 through iterative Direct Preference Optimization (DPO) guided by a Pairwise Reward Model (PairRM): the model generates multiple responses per prompt, PairRM reranks them, and DPO then trains on the resulting preference rankings. This process yields improved results on benchmarks such as AlpacaEval 2.0. More broadly, Snorkel AI emphasizes a data-centric paradigm, using programmatic data labeling and annotation to improve model accuracy while reducing training costs, and offers enterprise tools such as Snorkel Flow and Snorkel Custom to streamline data operations.
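The generate→rerank→DPO data step described above can be sketched as follows. PairRM itself is a learned pairwise reward model; here `score` is a hypothetical stand-in scorer so the example stays self-contained, and `build_preference_pairs` is an illustrative helper name, not part of any Snorkel API.

```python
from itertools import combinations

def build_preference_pairs(prompt, candidates, score):
    """Rank candidate responses with a pairwise scorer and return the
    (chosen, rejected) pair that would serve as one DPO training example."""
    # Count pairwise wins for each candidate, as a PairRM-style reranker would.
    wins = {c: 0 for c in candidates}
    for a, b in combinations(candidates, 2):
        if score(prompt, a, b) >= 0:   # positive score: a preferred over b
            wins[a] += 1
        else:
            wins[b] += 1
    ranked = sorted(candidates, key=lambda c: wins[c], reverse=True)
    # DPO then trains on the top-ranked vs. bottom-ranked response.
    return {"prompt": prompt, "chosen": ranked[0], "rejected": ranked[-1]}

# Toy scorer: prefer the longer answer (for illustration only; a real PairRM
# scores semantic quality, not length).
pair = build_preference_pairs(
    "Explain DPO briefly.",
    ["DPO.",
     "DPO optimizes a policy directly from preference pairs.",
     "It's a method."],
    lambda p, a, b: len(a) - len(b),
)
```

In the full iterative scheme, the DPO-updated model generates the next round of candidate responses and the loop repeats.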

Specifications (1 model)

Snorkel model specifications comparison

Model                    Released
Snorkel Mistral PairRM   2023-11

Available From (2 providers)

Pricing

Snorkel model pricing by provider

Model                    Provider      Input / 1M   Output / 1M   Type
Snorkel Mistral PairRM   Together AI   $0.20        $0.20         Serverless
Snorkel Mistral PairRM   Fireworks AI  $0.20        $0.20         Provisioned
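With input and output billed at the same $0.20 per 1M tokens listed above, per-request cost is straightforward to estimate. A minimal sketch (`request_cost` is an illustrative helper, not a provider API):

```python
# Serverless rate from the pricing table: $0.20 per 1M tokens,
# applied identically to input and output tokens.
RATE_PER_MILLION = 0.20  # USD

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimated USD cost of one request at the flat per-token rate."""
    return (input_tokens + output_tokens) / 1_000_000 * RATE_PER_MILLION

# Example: a 2,000-token prompt with a 500-token completion.
cost = request_cost(2_000, 500)  # 2,500 tokens -> $0.0005
```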

Frequently Asked Questions

What is Snorkel?
Snorkel AI develops LLMs aligned to specific tasks, using smaller, specialized reward models rather than ever-larger preference datasets. Its best-known model, Snorkel-Mistral-PairRM-DPO, refines Mistral-7B-Instruct-v0.2 through iterative DPO guided by the PairRM pairwise reward model, improving results on benchmarks such as AlpacaEval 2.0. The company also offers data-centric enterprise tooling (Snorkel Flow, Snorkel Custom) for programmatic data labeling and annotation.
How many models are in the Snorkel family?
The Snorkel family contains 1 model.
What is the latest Snorkel model?
The latest model is Snorkel Mistral PairRM, released in November 2023.
How much does Snorkel cost?
Snorkel models are available from $0.20 per 1M input tokens through providers such as Together AI and Fireworks AI.

Models (1)