Nous Capybara 34B on Fireworks AI

Capybara · Nous Research

Provisioned

Pricing

Type            Price (per 1M tokens)
Input tokens    $0.90
Output tokens   $0.90
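Since input and output tokens are billed at the same per-million rate, estimating the cost of a request is simple arithmetic. A minimal sketch (the function name is illustrative, not part of any Fireworks SDK):

```python
def estimate_cost_usd(input_tokens: int, output_tokens: int,
                      input_rate: float = 0.90, output_rate: float = 0.90) -> float:
    """Estimate request cost at the listed rates ($0.90 per 1M tokens, in and out)."""
    return (input_tokens * input_rate + output_tokens * output_rate) / 1_000_000
```

For example, a request consuming 500K input tokens and producing 500K output tokens costs $0.90 total.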

Capabilities

Vision · Multimodal · Reasoning · Function Calling · Tool Use · JSON Mode · Code Execution

About Nous Capybara 34B

Nous Capybara 34B, developed by Nous Research, is a large language model built on the Yi-34B architecture. Its 200K-token context window lets it handle very long inputs, making it well suited to text generation, conversational AI, long-document summarization, and information recall. The model was trained on a compact dataset of roughly 20,000 examples generated with the Amplify-Instruct synthesis technique, and it is distributed in multiple quantization formats to fit varied hardware capacities. The small training set brings potential accuracy trade-offs, but the model remains a capable choice for chatbot, content-generation, and data-analysis applications.
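Fireworks AI serves hosted models through an OpenAI-compatible chat completions endpoint. The sketch below builds such a request with only the standard library; the model id string is an assumption based on Fireworks' usual `accounts/fireworks/models/...` naming and should be checked against the model library:

```python
import json
import urllib.request

API_URL = "https://api.fireworks.ai/inference/v1/chat/completions"
# Assumed model id -- verify the exact slug in the Fireworks model library.
MODEL_ID = "accounts/fireworks/models/nous-capybara-34b"

def build_request(prompt: str, api_key: str, max_tokens: int = 256) -> urllib.request.Request:
    """Build (but do not send) a chat-completion request for the model."""
    payload = {
        "model": MODEL_ID,
        "max_tokens": max_tokens,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

# To send (requires a real API key):
#   with urllib.request.urlopen(build_request("Hello", api_key)) as resp:
#       print(json.load(resp)["choices"][0]["message"]["content"])
```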

Model Specs

Released        2024-10-31
Parameters      34B
Context         200K tokens
Architecture    Decoder-only transformer
