Pricing
| Type | Price (USD per 1M tokens) |
|---|---|
| Input tokens | $0.20 |
| Output tokens | $0.20 |
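Since input and output tokens are billed at the same rate, the cost of a request is a simple sum. A minimal sketch of that arithmetic (the helper name is illustrative, not part of any official SDK):

```python
# Cost estimate at $0.20 per 1M tokens for both input and output,
# per the pricing table above. `estimate_cost` is a hypothetical helper.
PRICE_PER_MILLION_USD = 0.20

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the total request cost in USD."""
    return (input_tokens + output_tokens) / 1_000_000 * PRICE_PER_MILLION_USD

# e.g. a 30,000-token prompt with a 2,000-token completion:
print(f"${estimate_cost(30_000, 2_000):.4f}")  # 32,000 tokens -> $0.0064
```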
About Llama 2 7B 32K
LLaMA-2-7B-32K is an open-source language model engineered by Together, derived from Meta's LLaMA-2 7B. It boasts a unique extended context length of up to 32,000 tokens, which enhances its ability to tackle tasks involving long-range context, such as multi-document question answering and lengthy text summarization. The model integrates optimizations, including FlashAttention-2, to boost inference and training efficiency. It combines pre-training with instruction tuning data for improved task performance and offers fine-tuning examples for specialized applications, like book summarization or multi-document Q&A. This model marks a substantial progress in the domain of large language models, serving as a potent tool for natural language processing tasks 1311.