LLM Reference

Vicuna 7B V1.5 16K

About

Vicuna-7B-v1.5-16K is a chat assistant LLM created by LMSYS, fine-tuned from Llama 2 with supervised instruction fine-tuning and linear RoPE scaling. Its 16K-token context window lets it handle longer interactions than typical LLMs. The model was trained on approximately 125,000 conversations collected from ShareGPT.com and uses a transformer architecture, making it well suited to conversational AI tasks such as multi-turn dialogue, question answering, and explanation. It performs strongly in benchmarks and human preference evaluations, though detailed quantitative results vary across reports. Note that it requires substantial computational resources and may reflect biases present in its training data.
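The linear RoPE scaling mentioned above extends the context window by dividing position indices by a scale factor before computing rotary-embedding angles, so extended positions map back into the range seen during pretraining. A minimal numeric sketch, assuming a 4x factor (Llama 2's 4K base context extended to 16K); the function name and head dimension here are illustrative, not the model's actual implementation:

```python
import math

def rope_angles(position, dim=8, base=10000.0, scale=1.0):
    """Rotary-embedding angles for a single position index.

    Linear RoPE scaling divides the position by `scale`, compressing
    the extended position range into the original training range.
    """
    pos = position / scale
    # One angle per rotated pair of dimensions, with the standard
    # RoPE frequency schedule base ** (-2i / dim).
    return [pos / (base ** (2 * i / dim)) for i in range(dim // 2)]

# With scale=4.0, position 16000 in the extended 16K window produces
# the same angles as position 4000 did during 4K pretraining.
assert rope_angles(16000, scale=4.0) == rope_angles(4000, scale=1.0)
```

The trade-off of this approach is that nearby positions become angularly closer together, which is why the scaled model is fine-tuned (here, on the ShareGPT conversations) rather than used zero-shot.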

Capabilities

Multimodal · Function Calling · Tool Use · JSON Mode

Specifications

Family: Vicuna
Parameters: 7B
Context: 16K
Architecture: Decoder Only
Specialization: general