LLM Reference

Vicuna 7B V1.5 on Together AI

Vicuna · LMSYS Org

Serverless

Pricing

Type             Price (per 1M tokens)
Input tokens     $0.20
Output tokens    $0.20
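
Because input and output tokens are priced identically, estimating request cost is a single multiplication. A minimal sketch (the helper name is ours, not part of any API):

```python
PRICE_PER_M = 0.20  # USD per 1M tokens; same rate for input and output

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimated USD cost of one request at the flat per-token rate."""
    return (input_tokens + output_tokens) / 1_000_000 * PRICE_PER_M

# Example: a 3,000-token prompt with a 1,000-token completion
print(estimate_cost(3_000, 1_000))
```

At this rate, 4,000 total tokens cost $0.0008, so a million such requests would cost about $800.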

Capabilities

Vision · Multimodal · Reasoning · Function Calling · Tool Use · JSON Mode · Code Execution

About Vicuna 7B V1.5

Vicuna 7B V1.5, created by LMSYS, is a 7-billion-parameter language model built on the transformer architecture. It was produced by fine-tuning Llama 2 on roughly 125,000 user-shared conversations collected from ShareGPT, using supervised instruction fine-tuning to strengthen its conversational ability. Known for generating coherent and contextually appropriate responses, the model is well suited to natural language processing research, machine learning experimentation, and chatbot applications. However, its 4,096-token context window constrains its ability to handle lengthy dialogues. Despite commendable performance on multiple benchmarks, it achieves about 90% of GPT-4's quality, leaving room for improvement in response accuracy and quality.

Get Started
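
A hedged sketch of calling the model through Together AI's OpenAI-compatible chat completions endpoint. The endpoint URL and the model identifier `lmsys/vicuna-7b-v1.5` are assumptions based on Together AI's public API conventions; check the dashboard for the exact model string, and set `TOGETHER_API_KEY` in your environment before sending:

```python
import json
import os
import urllib.request

# Assumed endpoint and model ID; verify both against Together AI's docs.
API_URL = "https://api.together.xyz/v1/chat/completions"
MODEL_ID = "lmsys/vicuna-7b-v1.5"

def build_request(prompt: str, max_tokens: int = 256) -> dict:
    """Build a chat-completions request body for Vicuna 7B V1.5."""
    return {
        "model": MODEL_ID,
        # Prompt plus completion must fit in the 4,096-token context window.
        "max_tokens": max_tokens,
        "messages": [{"role": "user", "content": prompt}],
    }

def send(body: dict) -> dict:
    """POST the request body; requires TOGETHER_API_KEY to be set."""
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(body).encode(),
        headers={
            "Authorization": f"Bearer {os.environ['TOGETHER_API_KEY']}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

body = build_request("Summarize the transformer architecture in one sentence.")
print(body["model"])
```

Calling `send(body)` returns the standard chat-completions JSON, with the generated text under `choices[0]["message"]["content"]`.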

Model Specs

Released       2023-10-23
Parameters     7B
Context        4K
Architecture   Decoder Only

Related Models on Together AI