LLM Reference

Falcon 40B

About

Falcon 40B is an open-source large language model developed by the Technology Innovation Institute (TII) in Abu Dhabi. It uses a causal decoder-only architecture with 40 billion parameters and incorporates rotary positional embeddings, multi-query attention, and FlashAttention, which improve contextual modeling and inference efficiency. Trained on 1 trillion tokens drawn largely from the RefinedWeb dataset, Falcon 40B performs well across natural language processing tasks such as text generation, language translation, and question answering. It supports multiple languages and is released under the Apache 2.0 license, permitting both research and commercial use. Inference typically requires around 85-100 GB of memory, which was competitive for open models of its size at release.
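The multi-query attention mentioned above can be sketched in plain NumPy. This is an illustration of the general technique, not Falcon's actual implementation: the shapes, weight matrices, and naive softmax below are assumptions chosen for clarity. The key idea is that all query heads share a single key/value head, so the KV cache shrinks by a factor of `n_heads` compared with standard multi-head attention.

```python
import numpy as np

def multi_query_attention(x, Wq, Wk, Wv, n_heads):
    """Causal multi-query attention sketch: n_heads query heads share
    ONE key head and ONE value head (vs. one K/V pair per head in MHA)."""
    seq, d_model = x.shape
    d_head = d_model // n_heads
    q = (x @ Wq).reshape(seq, n_heads, d_head)  # per-head queries
    k = x @ Wk                                  # single shared key head
    v = x @ Wv                                  # single shared value head
    # causal mask: position i may attend only to positions <= i
    mask = np.triu(np.ones((seq, seq), dtype=bool), k=1)
    out = np.empty((seq, n_heads, d_head))
    for h in range(n_heads):
        scores = q[:, h, :] @ k.T / np.sqrt(d_head)
        scores = np.where(mask, -np.inf, scores)
        # numerically stable softmax over each row
        weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
        weights /= weights.sum(axis=-1, keepdims=True)
        out[:, h, :] = weights @ v
    return out.reshape(seq, d_model)
```

Sharing K/V across heads is what makes the memory figure above plausible: the fp16 weights of a 40B-parameter model alone occupy roughly 40e9 × 2 bytes ≈ 80 GB, and the multi-query KV cache adds comparatively little on top, consistent with the quoted 85-100 GB total.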

Capabilities

Vision, Multimodal, Reasoning, Function Calling, Tool Use, Structured Outputs, Code Execution

Providers (4)

| Provider | Input (per 1M tokens) | Output (per 1M tokens) | Type |
|---|---|---|---|
| Microsoft Foundry | $1.54 | $1.77 | Provisioned |
| GCP Vertex AI | | | Serverless |
| Alibaba Cloud PAI-EAS | | | Serverless |
| Replicate API | $0.65 | $2.75 | Serverless |


Specifications

Family: Falcon
Released: 2023-11-28
Parameters: 40B
Architecture: Decoder-only
Specialization: general
Training: finetuning

Created by

Technology Innovation Institute
Innovative open-source AI for global impact

Abu Dhabi, United Arab Emirates
Founded 2019