LLM Reference

Vicuna 7B V1.5

About

Vicuna 7B V1.5, created by LMSYS, is a 7-billion-parameter language model built on the transformer architecture. It was produced by fine-tuning Llama 2 on roughly 125,000 user-shared conversations collected from ShareGPT, using supervised instruction fine-tuning to strengthen its conversational ability. The model generates coherent, contextually appropriate responses, making it well suited to natural language processing research, machine learning experimentation, and chatbot development. Its 4,096-token context window, however, limits how much of a long dialogue it can track. In LMSYS's evaluation it reaches roughly 90% of ChatGPT's quality (with GPT-4 as the judge), leaving room for improvement in response accuracy and quality.
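Because Vicuna was instruction-tuned on conversations, prompts should follow its fine-tuning template. A minimal sketch of that template is below; the wording follows the conversation format published in the FastChat project, but treat the exact system line as an assumption to verify against your model version.

```python
# System preamble used by the Vicuna v1.5 template (assumed wording --
# check the FastChat conversation templates for your exact model version).
SYSTEM = (
    "A chat between a curious user and an artificial intelligence assistant. "
    "The assistant gives helpful, detailed, and polite answers to the user's questions."
)

def build_vicuna_prompt(turns):
    """Format (user, assistant_or_None) turns into a Vicuna-style prompt.

    Pass None as the assistant message on the final turn so the prompt ends
    with "ASSISTANT:" and the model continues from there.
    """
    parts = [SYSTEM]
    for user_msg, assistant_msg in turns:
        parts.append(f"USER: {user_msg}")
        if assistant_msg is None:
            parts.append("ASSISTANT:")  # generation starts here
        else:
            parts.append(f"ASSISTANT: {assistant_msg}")
    return " ".join(parts)

prompt = build_vicuna_prompt([("What is Vicuna?", None)])
```

The same builder works for multi-turn chats: supply earlier turns with their assistant replies filled in, and leave only the last turn open.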

Capabilities

Multimodal
Function Calling
Tool Use
JSON Mode

Providers (1)

Provider          Input (per 1M)   Output (per 1M)   Type
Together AI API   $0.20            $0.20             Serverless
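The Together AI listing implies access through an OpenAI-compatible chat completions endpoint. The sketch below only constructs the request payload; the model ID "lmsys/vicuna-7b-v1.5" and the endpoint shape are assumptions to check against the provider's catalog and API reference.

```python
import json

def make_chat_request(user_message, model="lmsys/vicuna-7b-v1.5",
                      max_tokens=256, temperature=0.7):
    """Build an OpenAI-style chat completions payload (model ID assumed)."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": user_message}],
        "max_tokens": max_tokens,
        "temperature": temperature,
    }

payload = make_chat_request("Summarize Vicuna 7B V1.5 in one sentence.")
body = json.dumps(payload)  # POST this with an Authorization: Bearer header
```

Sending the request is left out deliberately; any HTTP client works once the API key and base URL are in place.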

Specifications

Family:         Vicuna
Parameters:     7B
Context:        4K
Architecture:   Decoder-only
Specialization: General
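The context window bounds the prompt and the response together, so the generation budget shrinks as the prompt grows. A quick sketch of that arithmetic, using the 4,096-token window cited in the About section:

```python
CONTEXT_WINDOW = 4096  # Vicuna 7B V1.5 context length in tokens

def max_generation_budget(prompt_tokens, context_window=CONTEXT_WINDOW):
    """Tokens left for the model's reply after the prompt fills part of the window."""
    remaining = context_window - prompt_tokens
    return max(remaining, 0)

max_generation_budget(3000)  # -> 1096 tokens available for the response
```

A prompt that already consumes the whole window leaves a budget of zero, which is why long dialogues must be truncated or summarized before being re-sent.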