LLM Reference

Alpaca 7B on Together AI

Alpaca · Stanford Artificial Intelligence Laboratory (SAIL)

Serverless

Pricing

Type             Price (per 1M tokens)
Input tokens     $0.20
Output tokens    $0.20
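
As a worked example at this flat rate, a request with 1,500 input tokens and 500 output tokens costs (1,500 + 500) × $0.20 / 1,000,000 = $0.0004. A minimal sketch of that arithmetic in Python (the token counts are illustrative):

```python
# Cost arithmetic for Alpaca 7B serverless pricing: $0.20 per 1M tokens,
# with the same rate applied to input and output tokens.

PRICE_PER_MILLION = 0.20  # USD per 1M tokens

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the USD cost of a single request at the flat per-token rate."""
    return (input_tokens + output_tokens) * PRICE_PER_MILLION / 1_000_000

# Example: 1,500 prompt tokens plus 500 completion tokens.
print(f"${request_cost(1_500, 500):.6f}")  # $0.000400
```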

Capabilities

Vision · Multimodal · Reasoning · Function Calling · Tool Use · JSON Mode · Code Execution

About Alpaca 7B

Alpaca 7B is an instruction-following language model developed at Stanford University by fine-tuning Meta's LLaMA 7B. It produces coherent, context-sensitive responses and performs comparably to OpenAI's text-davinci-003 despite its much smaller size and far lower training cost. Its decoder-only transformer architecture of 7 billion parameters balances capability against resource needs well enough to run on modest hardware such as laptops. The model was fine-tuned on 52,000 instruction-following demonstrations; like other models of its class, it remains prone to hallucination and stereotyping, so real-world deployments call for appropriate safeguards.
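
Because Alpaca was fine-tuned on demonstrations written in a fixed instruction template, prompts typically follow the format published in the Stanford Alpaca repository. A minimal sketch of assembling that prompt; the alpaca_prompt helper name is ours:

```python
# Build a prompt in the Alpaca instruction template (from the Stanford
# Alpaca repository). An optional input field supplies extra context.

def alpaca_prompt(instruction: str, input_text: str = "") -> str:
    if input_text:
        return (
            "Below is an instruction that describes a task, paired with an "
            "input that provides further context. Write a response that "
            "appropriately completes the request.\n\n"
            f"### Instruction:\n{instruction}\n\n"
            f"### Input:\n{input_text}\n\n"
            "### Response:\n"
        )
    return (
        "Below is an instruction that describes a task. Write a response "
        "that appropriately completes the request.\n\n"
        f"### Instruction:\n{instruction}\n\n"
        "### Response:\n"
    )

print(alpaca_prompt("Summarize the plot of Hamlet in one sentence."))
```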

Get Started
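
A minimal sketch of a request against Together AI's OpenAI-compatible completions endpoint, using Python and the requests library. The endpoint path and the model identifier togethercomputer/alpaca-7b are assumptions; confirm both in the Together AI documentation, and export a valid TOGETHER_API_KEY first.

```python
# Minimal completion request to Together AI's serverless API.
# Assumptions: the endpoint path and model ID below are illustrative;
# check the Together AI docs for the exact strings.

import os
import requests

API_URL = "https://api.together.xyz/v1/completions"  # assumed endpoint path
MODEL_ID = "togethercomputer/alpaca-7b"              # assumed model string

# Prompt in the Alpaca instruction template.
prompt = (
    "Below is an instruction that describes a task. Write a response "
    "that appropriately completes the request.\n\n"
    "### Instruction:\nList three everyday uses for a paperclip.\n\n"
    "### Response:\n"
)

response = requests.post(
    API_URL,
    headers={"Authorization": f"Bearer {os.environ['TOGETHER_API_KEY']}"},
    json={
        "model": MODEL_ID,
        "prompt": prompt,
        "max_tokens": 256,
        "temperature": 0.7,
    },
    timeout=30,
)
response.raise_for_status()
# OpenAI-style completions responses put generated text under choices[0].text.
print(response.json()["choices"][0]["text"])
```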

Model Specs

Released        2023-03-31
Parameters      7B
Architecture    Decoder-only