AI Glossary
large language model
LLM
Definition
A large language model (LLM) is a neural network, typically Transformer-based with billions of parameters, trained on vast text corpora to predict the next token in a sequence. This simple objective, applied at scale, yields broad natural-language abilities: answering questions, summarizing, translating, and writing code, along with emergent capabilities not explicitly trained for. LLMs are central to modern AI because they generate fluent, human-like text and adapt readily across domains.
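The core mechanism, next-token prediction, can be sketched in a few lines. This toy example uses a hypothetical five-word vocabulary and made-up logits (not from any real model): the model assigns one score per vocabulary token, softmax converts scores to probabilities, and greedy decoding selects the most probable token.

```python
import math

# Hypothetical vocabulary and logits for illustration only.
vocab = ["the", "cat", "sat", "mat", "<eos>"]
logits = [1.2, 0.3, 2.5, 0.7, -1.0]  # assumed model scores for the next position

def softmax(xs):
    m = max(xs)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

probs = softmax(logits)
next_token = vocab[probs.index(max(probs))]  # greedy decoding: pick the argmax
print(next_token)  # "sat" has the highest logit, hence the highest probability
```

Real LLMs run this step repeatedly, appending each predicted token to the input, and usually sample from the distribution (with a temperature) rather than always taking the argmax.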