LLM Reference

Vicuna 13B

About

Vicuna-13B is a fine-tuned, open-source chatbot derived from the LLaMA model and trained on roughly 70,000 user-shared conversations collected from ShareGPT. Built on the Transformer architecture, it has 13 billion parameters. Early evaluations suggest it reaches over 90% of the quality of models like OpenAI's ChatGPT and Google Bard, and that it outperforms other open-source models such as LLaMA and Stanford Alpaca in most scenarios. The training conversations were originally captured as HTML and converted to markdown for quality filtering. Notable improvements include memory optimizations that support a context length of 2,048 tokens and better multi-turn conversation handling, though the model still struggles with reasoning, mathematics, and factual consistency, and it has not been fully optimized for safety or bias mitigation. It is available in several versions, including a 4-bit quantized edition for efficiency, and is licensed for non-commercial use only.
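Multi-turn handling depends on how the conversation is serialized into a single prompt. A minimal sketch of assembling such a prompt, assuming the Vicuna v1.1 template popularized by FastChat (the system message and `USER:`/`ASSISTANT:` separators below are taken from that template, not from this page):

```python
def build_vicuna_prompt(turns, system=None):
    """Assemble a Vicuna-style multi-turn prompt (v1.1 template, assumed).

    turns: list of (user_message, assistant_reply) pairs; the final pair
    may use assistant_reply=None so the prompt ends awaiting a reply.
    """
    if system is None:
        # Default system message from the assumed v1.1 template.
        system = ("A chat between a curious user and an artificial "
                  "intelligence assistant. The assistant gives helpful, "
                  "detailed, and polite answers to the user's questions.")
    prompt = system
    for user_msg, assistant_msg in turns:
        prompt += f" USER: {user_msg} ASSISTANT:"
        if assistant_msg is not None:
            # Completed turns end with the end-of-sequence marker.
            prompt += f" {assistant_msg}</s>"
    return prompt

print(build_vicuna_prompt([("Hello!", "Hi there!"), ("What is Vicuna?", None)]))
```

Each completed assistant turn is closed with `</s>`, so the model treats earlier replies as finished and continues only after the final `ASSISTANT:`.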

Capabilities

Vision · Multimodal · Reasoning · Function Calling · Tool Use · Structured Outputs · Code Execution

Providers (2)

Provider        Input (per 1M)  Output (per 1M)  Type
GCP Vertex AI                                    Serverless
Replicate API   $0.10           $0.50            Serverless

Specifications

Family: Vicuna
Released: 2023-10-23
Parameters: 13B
Context: 2K tokens
Architecture: Decoder-only
Specialization: General
Training: Fine-tuning

Created by

LMSYS (Large Model Systems Organization)
Crowdsourced AI model benchmarking

Berkeley, California, United States
Founded 2023