LLM Reference

Falcon 40B

About

Falcon 40B is a leading open-source large language model developed by the Technology Innovation Institute (TII) in Abu Dhabi, featuring a causal decoder-only architecture with 40 billion parameters. It stands out for its use of rotary positional embeddings, multi-query attention, and FlashAttention, which improve contextual understanding and inference efficiency. Trained on 1 trillion tokens drawn largely from the curated RefinedWeb dataset, Falcon 40B performs well across natural language processing tasks ranging from text generation to language translation and question answering. It supports multiple languages and is released under the Apache 2.0 license, permitting both research and commercial use. Inference requires roughly 85-100 GB of memory, setting a benchmark for performance and scalability in its class.
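The quoted 85-100 GB inference footprint can be sanity-checked with a back-of-the-envelope estimate: 40 billion parameters at 2 bytes each (fp16) gives 80 GB of weights, plus some overhead for activations and the KV cache. A minimal sketch; the 1.2 overhead factor below is an illustrative assumption, not a published figure:

```python
def inference_memory_gb(num_params: float, bytes_per_param: float,
                        overhead: float = 1.2) -> float:
    """Rough inference footprint in GB: weight bytes times an overhead
    factor for activations and the KV cache (the 1.2 default is an
    assumption, not a measured value)."""
    return num_params * bytes_per_param * overhead / 1e9

# Falcon 40B weights alone in fp16: 40e9 params * 2 bytes = 80 GB.
weights_only = inference_memory_gb(40e9, 2, overhead=1.0)  # 80.0 GB
# With the assumed 1.2x overhead, the total lands inside the
# 85-100 GB range quoted above.
fp16_total = inference_memory_gb(40e9, 2)                  # 96.0 GB
# 8-bit quantization roughly halves the footprint.
int8_total = inference_memory_gb(40e9, 1)                  # 48.0 GB
```

The same arithmetic explains why 8-bit or 4-bit quantization is commonly used to fit models of this size onto fewer accelerators.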

Capabilities

Multimodal, Function Calling, Tool Use, JSON Mode

Providers (4)

Provider                Input (per 1M)   Output (per 1M)   Type
Azure OpenAI            —                —                 Provisioned
GCP Vertex AI           —                —                 Serverless
Alibaba Cloud PAI-EAS   —                —                 Serverless
Replicate API           —                —                 Serverless

Specifications

Family: Falcon
Released: 2023-11-28
Parameters: 40B
Architecture: Decoder Only
Specialization: general