LLM Reference

Alpaca 7B

About

Alpaca 7B is a language model developed at Stanford University, fine-tuned from Meta's LLaMA 7B for instruction-following tasks. It produces coherent, context-sensitive responses and is comparable in output quality to OpenAI's text-davinci-003, despite its smaller size and far lower training cost. With a transformer-based architecture of 7 billion parameters, it balances performance against resource needs well enough to run on consumer hardware such as laptops. It was fine-tuned on 52,000 instruction-following demonstrations and offers high-quality interaction, but it remains prone to issues such as hallucination and stereotyping, so real-world applications call for care.
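Because Alpaca is tuned specifically for instruction following, prompts are expected in the fixed template used for its 52,000 training demonstrations. A minimal sketch of that template, based on the format published in Stanford's stanford_alpaca repository; the `format_prompt` helper name is illustrative, not part of any official library:

```python
# Alpaca's two prompt layouts: one for bare instructions, one for
# instructions paired with additional input context.
PROMPT_NO_INPUT = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n### Response:\n"
)

PROMPT_WITH_INPUT = (
    "Below is an instruction that describes a task, paired with an input "
    "that provides further context. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n### Input:\n{input}\n\n### Response:\n"
)

def format_prompt(instruction: str, context: str = "") -> str:
    """Wrap a user instruction in Alpaca's expected prompt layout."""
    if context:
        return PROMPT_WITH_INPUT.format(instruction=instruction, input=context)
    return PROMPT_NO_INPUT.format(instruction=instruction)

print(format_prompt("Summarize the text.", "Alpaca 7B is a 7B-parameter model."))
```

The model then generates text after the trailing `### Response:` marker, so the raw completion can be used directly as the answer.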

Capabilities

Multimodal · Function Calling · Tool Use · JSON Mode

Providers (1)

Provider         Input (per 1M)  Output (per 1M)  Type
Together AI API  $0.20           $0.20            Serverless
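With a flat $0.20 per million tokens for both input and output, request cost is simple to estimate. A sketch of assembling a request for an OpenAI-compatible completions endpoint and estimating its cost; the endpoint URL and model identifier are assumptions for illustration, not confirmed values from Together AI's catalog:

```python
import json
import urllib.request

API_URL = "https://api.together.xyz/v1/completions"  # assumed endpoint
MODEL_ID = "togethercomputer/alpaca-7b"              # hypothetical model id

PRICE_PER_M = 0.20  # USD per 1M tokens, same rate for input and output

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimate request cost in USD from the per-1M-token rate."""
    return (input_tokens + output_tokens) * PRICE_PER_M / 1_000_000

def build_request(prompt: str, api_key: str, max_tokens: int = 256):
    """Assemble (but do not send) a completions request."""
    payload = {"model": MODEL_ID, "prompt": prompt, "max_tokens": max_tokens}
    return urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )

# e.g. a 1,000-token prompt with a 500-token reply:
print(f"${estimate_cost(1000, 500):.6f}")  # $0.000300
```

Sending the request (via `urllib.request.urlopen` or any HTTP client) requires a valid API key; the builder above only shows the payload shape.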

Specifications

Family: Alpaca
Released: 2023-03-31
Architecture: Decoder-only
Specialization: General