LLM Reference

Vicuna 13B

About

Vicuna-13B is a fine-tuned open-source chatbot derived from the LLaMA model and trained on roughly 70,000 user-shared conversations collected from ShareGPT. Built on the Transformer architecture at a 13-billion-parameter scale, it achieves, in early evaluations, over 90% of the quality of models such as OpenAI's ChatGPT and Google's Bard, and outperforms other open models such as LLaMA and Stanford Alpaca in most tested scenarios. The training conversations were initially captured as HTML and converted to markdown for quality filtering. Notable improvements include memory optimizations that support a context length of 2048 tokens and better multi-turn conversation handling, though the model still struggles with reasoning, mathematics, and factual consistency, and has not been fully optimized for safety or bias mitigation. It is available in several versions, including a 4-bit quantized edition for efficiency, and is licensed for non-commercial use.
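The multi-turn conversation handling mentioned above relies on a specific chat template: Vicuna (v1.1-style) prompts open with a fixed system sentence and alternate USER/ASSISTANT turns. The sketch below assembles such a prompt; the exact separator and end-of-sequence handling are a sketch based on the commonly documented v1.1 format, so treat the details as an assumption rather than a canonical specification.

```python
def build_vicuna_prompt(turns):
    """Assemble a multi-turn prompt in the Vicuna v1.1-style chat format.

    turns: list of (user_message, assistant_reply) pairs; pass None as the
    final assistant_reply to ask the model for a new completion.
    """
    # Fixed system preamble used by Vicuna-style chat templates (assumed v1.1 wording).
    system = ("A chat between a curious user and an artificial intelligence "
              "assistant. The assistant gives helpful, detailed, and polite "
              "answers to the user's questions.")
    parts = [system]
    for user_msg, assistant_msg in turns:
        parts.append(f"USER: {user_msg}")
        if assistant_msg is None:
            # Leave the assistant turn open so the model continues from here.
            parts.append("ASSISTANT:")
        else:
            # Completed assistant turns end with the EOS marker.
            parts.append(f"ASSISTANT: {assistant_msg}</s>")
    return " ".join(parts)

prompt = build_vicuna_prompt([("What is the capital of France?", None)])
```

A second (user, assistant) pair with a non-None reply would extend the same string, which is how prior turns are carried into the 2048-token context window.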

Capabilities

Multimodal | Function Calling | Tool Use | JSON Mode

Providers(2)

Provider        Input (per 1M)  Output (per 1M)  Type
GCP Vertex AI   -               -                Serverless
Replicate API   -               -                Serverless

Specifications

Family          Vicuna
Released        2023-10-23
Parameters      13B
Context         2K
Architecture    Decoder Only
Specialization  General
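The 13B parameter count explains why the 4-bit quantized edition mentioned earlier matters in practice: weight memory scales linearly with bits per parameter. A minimal back-of-the-envelope sketch (weights only, ignoring activations, KV cache, and runtime overhead, which are assumptions of this simplification):

```python
def approx_weight_memory_gb(n_params_billion, bits_per_param):
    """Rough weight-only memory footprint in GB (1 GB = 1e9 bytes)."""
    n_bytes = n_params_billion * 1e9 * bits_per_param / 8
    return n_bytes / 1e9

fp16_gb = approx_weight_memory_gb(13, 16)  # 16-bit weights: 26.0 GB
int4_gb = approx_weight_memory_gb(13, 4)   # 4-bit quantized: 6.5 GB
```

On this estimate, 4-bit quantization cuts the weight footprint by 4x, which is what brings a 13B model within reach of a single consumer GPU.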