LLM Reference

Vicuna 13B 16K

About

The Vicuna 13B v1.5 16K model by LMSYS is a conversational AI built on the transformer architecture and fine-tuned from Llama 2. It has 13 billion parameters and a context window of up to 16,000 tokens, making it suitable for tasks such as chatbots and content generation. Fine-tuned on roughly 125,000 user-shared conversations from ShareGPT, it produces coherent text and engaging dialogue, though it may struggle with domain-specific knowledge and nuanced context, and it inherits biases from its training data.
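Since the card does not show it, here is a minimal sketch of the Vicuna v1.x conversation format the model was fine-tuned with (a system line followed by alternating "USER:" / "ASSISTANT:" turns). The exact default system prompt below is the one commonly used with Vicuna v1.5; treat its wording as an assumption.

```python
# Sketch of the Vicuna v1.x prompt format (assumed default system prompt).
SYSTEM = (
    "A chat between a curious user and an artificial intelligence assistant. "
    "The assistant gives helpful, detailed, and polite answers to the user's questions."
)

def build_vicuna_prompt(turns, system=SYSTEM):
    """turns: list of (user_msg, assistant_msg_or_None) pairs.

    A trailing None assistant message leaves "ASSISTANT:" open so the
    model completes it.
    """
    parts = [system]
    for user_msg, assistant_msg in turns:
        parts.append(f"USER: {user_msg}")
        if assistant_msg is None:
            parts.append("ASSISTANT:")  # open turn for the model to fill
        else:
            parts.append(f"ASSISTANT: {assistant_msg}")
    return " ".join(parts)

prompt = build_vicuna_prompt([("What is the capital of France?", None)])
```

The same string-level format is what serving stacks apply under the hood when they expose a chat-style API over this model.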

Capabilities

Multimodal · Function Calling · Tool Use · JSON Mode

Providers (1)

Provider | Input (per 1M) | Output (per 1M) | Type
GCP Vertex AI | — | — | Serverless

Specifications

Family: Vicuna
Released: 2023-10-23
Parameters: 13B
Context: 16K
Architecture: Decoder Only
Specialization: general
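The 16K context above bounds how much conversation history can be sent per request. A minimal sketch of trimming history to fit that budget, assuming a crude whitespace token count rather than the model's real Llama 2 tokenizer:

```python
# Sketch: keep only as many recent turns as fit the 16K-token context,
# reserving room for the model's reply. Token counting is a rough
# whitespace-split approximation (assumption), not the real tokenizer.

CONTEXT_TOKENS = 16_000
RESERVED_FOR_REPLY = 1_000

def approx_tokens(text: str) -> int:
    return len(text.split())  # crude stand-in for a real tokenizer

def trim_history(turns, budget=CONTEXT_TOKENS - RESERVED_FOR_REPLY):
    """Drop oldest turns until the remaining history fits the budget."""
    kept, used = [], 0
    for turn in reversed(turns):  # walk from newest to oldest
        cost = approx_tokens(turn)
        if used + cost > budget:
            break
        kept.append(turn)
        used += cost
    return list(reversed(kept))

history = ["hello " * 20000, "short question", "short answer"]
trimmed = trim_history(history)  # the oversized oldest turn is dropped
```

In production the same idea applies, but with the deployed tokenizer's counts instead of the whitespace approximation.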