LLM Reference

Alpaca 7B

About

Alpaca 7B is an instruction-following language model from Stanford University, fine-tuned from Meta's LLaMA 7B. It produces coherent, context-sensitive responses comparable to OpenAI's text-davinci-003 despite its smaller size and far lower training cost. With a 7-billion-parameter transformer architecture, it balances performance against resource needs well enough to run on consumer hardware such as laptops. It was fine-tuned on 52,000 instruction-following demonstrations, but it shares common weaknesses such as hallucination and stereotyping, so real-world deployment calls for care.
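Because Alpaca was fine-tuned on instruction demonstrations, prompts follow a fixed template. A minimal sketch in Python of the two template variants published with the Stanford Alpaca training data (instruction-only, and instruction plus input):

```python
def build_alpaca_prompt(instruction: str, input_text: str = "") -> str:
    """Assemble a prompt in the Alpaca fine-tuning format.

    The 52k demonstrations used two templates: one for instruction-only
    tasks and one for tasks with an additional input for context.
    """
    if input_text:
        return (
            "Below is an instruction that describes a task, paired with an "
            "input that provides further context. Write a response that "
            "appropriately completes the request.\n\n"
            f"### Instruction:\n{instruction}\n\n"
            f"### Input:\n{input_text}\n\n"
            "### Response:\n"
        )
    return (
        "Below is an instruction that describes a task. Write a response "
        "that appropriately completes the request.\n\n"
        f"### Instruction:\n{instruction}\n\n"
        "### Response:\n"
    )

# Example: an instruction with supporting input.
prompt = build_alpaca_prompt("Summarize the passage.", "LLaMA is a family of models.")
```

The model then generates its answer as the continuation after the final `### Response:` marker.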

Capabilities

Vision, Multimodal, Reasoning, Function Calling, Tool Use, Structured Outputs, Code Execution

Providers (1)

Provider      Input (per 1M)   Output (per 1M)   Type
Together AI   $0.20            $0.20             Serverless
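To make the per-token pricing concrete, a small sketch that converts the table's per-1M rates into a per-request cost (the example token counts are illustrative, not from the source):

```python
# Rates from the provider table above: $0.20 per 1M tokens
# for both input and output on Together AI.
INPUT_RATE_PER_M = 0.20
OUTPUT_RATE_PER_M = 0.20

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the USD cost of a single request at the listed rates."""
    return (input_tokens * INPUT_RATE_PER_M
            + output_tokens * OUTPUT_RATE_PER_M) / 1_000_000

# e.g. a 1,500-token prompt with a 500-token completion:
cost = request_cost(1_500, 500)  # 2,000 tokens total at $0.20/M
```

At these rates, a million tokens of combined input and output costs $0.20, which is what makes the model attractive for high-volume, low-cost workloads.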


Specifications

Family: Alpaca
Released: 2023-03-31
Parameters: 7B
Architecture: Decoder-only
Specialization: General
Training: Fine-tuning

Created by

Pioneering AI research and computing infrastructure

Stanford, California, United States
Founded 1962
