LLM Reference

PPLX Models by Perplexity Labs

This model family is considered obsolete. Consider newer alternatives in Related Model Families below.
11 models · 2023–2024 · Up to 32K ctx · From $0.1/1M input

About

The PPLX family of large language models by Perplexity AI is designed to address two shortcomings of traditional LLMs: outdated information and response accuracy. It features two core models, pplx-7b-online and pplx-70b-online, with 7 billion and 70 billion parameters respectively. These models can access real-time data from the internet, enabling them to generate responses that are both current and factual. Built on the open-source base models mistral-7b and llama2-70b, they are further improved by Perplexity's in-house fine-tuning and search technology. Evaluations indicate performance on par with, or surpassing, prominent models such as GPT-3.5 and Llama 2 at delivering precise and timely answers. The family also includes the pplx-7b-chat and pplx-70b-chat models, available via the API and the Labs platform.
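Since the chat and online models are served through Perplexity's OpenAI-compatible chat-completions API, a request against them can be sketched as below. This is a minimal illustration, not official sample code: the endpoint URL follows the documented chat-completions shape, and `YOUR_API_KEY` is a placeholder you would replace with a real key.

```python
import json

# OpenAI-compatible chat-completions endpoint (assumption based on
# Perplexity's documented API shape).
API_URL = "https://api.perplexity.ai/chat/completions"

def build_request(model: str, question: str, api_key: str) -> tuple[dict, dict]:
    """Build the headers and JSON body for a single-turn chat completion."""
    headers = {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }
    body = {
        "model": model,
        "messages": [{"role": "user", "content": question}],
    }
    return headers, body

headers, body = build_request(
    "pplx-7b-online", "Who won the most recent World Cup?", "YOUR_API_KEY"
)
# To actually send the request you would POST it, e.g. with the
# `requests` library: requests.post(API_URL, headers=headers, data=json.dumps(body))
print(body["model"])  # → pplx-7b-online
```

The online variants answer with fresh web results, so the same request body works for both chat and online models; only the `model` field changes.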

Specifications (11 models)

PPLX model specifications comparison

| Model | Released | Context | Parameters | Structured Outputs |
|---|---|---|---|---|
| Perplexity pplx-7b-chat | 2024-11 | — | 7B | Yes |
| Perplexity pplx-7b-online | 2024-11 | — | 7B | Yes |
| Perplexity pplx-70b-chat | 2024-11 | — | 70B | Yes |
| Perplexity pplx-70b-online | 2024-11 | — | 70B | Yes |
| Perplexity pplx-embed-v1 | 2024-05 | — | 4B | No |
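The table above can be queried programmatically once it is expressed as data. The sketch below transcribes the listed specs into a lookup table and filters for structured-output support; the dictionary shape and helper name are illustrative choices, not part of any Perplexity SDK.

```python
# Specifications transcribed from the table above (per-model context
# sizes are not listed, so they are omitted here).
SPECS = {
    "pplx-7b-chat":    {"released": "2024-11", "params": "7B",  "structured_outputs": True},
    "pplx-7b-online":  {"released": "2024-11", "params": "7B",  "structured_outputs": True},
    "pplx-70b-chat":   {"released": "2024-11", "params": "70B", "structured_outputs": True},
    "pplx-70b-online": {"released": "2024-11", "params": "70B", "structured_outputs": True},
    "pplx-embed-v1":   {"released": "2024-05", "params": "4B",  "structured_outputs": False},
}

def supports_structured_outputs(model: str) -> bool:
    """Return True if the listed specs show structured-output support."""
    return SPECS[model]["structured_outputs"]

structured = [m for m, s in SPECS.items() if s["structured_outputs"]]
print(structured)  # the four chat/online models; the embedding model is excluded
```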

Available From (1 provider)

Pricing

PPLX model pricing by provider

| Model | Provider | Input / 1M | Output / 1M | Type |
|---|---|---|---|---|
| Perplexity pplx-7b-chat | Perplexity Labs | $0.1 | $0.1 | Serverless |
| Perplexity pplx-7b-online | Perplexity Labs | $0.3 | $0.3 | Serverless |
| Perplexity pplx-70b-chat | Perplexity Labs | $0.4 | $0.4 | Serverless |
| Perplexity pplx-70b-online | Perplexity Labs | $0.6 | $0.6 | Serverless |
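Because each model charges a flat per-million-token rate for both input and output, per-request cost is simple arithmetic. The sketch below estimates cost from the rates in the pricing table; the function name and rate table are ours, assembled from the figures above.

```python
# Per-million-token prices in USD (input_rate, output_rate),
# transcribed from the pricing table above.
PRICES = {
    "pplx-7b-chat":    (0.1, 0.1),
    "pplx-7b-online":  (0.3, 0.3),
    "pplx-70b-chat":   (0.4, 0.4),
    "pplx-70b-online": (0.6, 0.6),
}

def estimate_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Estimate the USD cost of one request from the listed per-1M rates."""
    in_rate, out_rate = PRICES[model]
    return (input_tokens * in_rate + output_tokens * out_rate) / 1_000_000

# Example: 2,000 input tokens and 500 output tokens on pplx-70b-online:
# (2000 * 0.6 + 500 * 0.6) / 1e6 = 0.0015
print(f"${estimate_cost('pplx-70b-online', 2000, 500):.4f}")  # → $0.0015
```

At these rates, even the largest model costs well under a cent for a typical short exchange, which is why the FAQ below compares families on specs and context rather than raw price.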

Frequently Asked Questions

What is PPLX used for?
PPLX is best suited to structured-output workloads: all four chat and online models list structured-output support, and the online variants add real-time web access for current, factual answers.
How does PPLX compare to Claude 4.7?
PPLX by Perplexity Labs is strongest where you need structured outputs, while Claude 4.7 by Anthropic is the closest related family to check for vision and multimodal work. PPLX has 11 listed variants and reaches up to 32K context; Claude 4.7 reaches up to 1M context. Compare the specs and pricing tables before choosing a production model.
Which PPLX model should I use?
For the lowest listed input price, start with Perplexity pplx-7b-chat through Perplexity Labs at $0.1/1M input tokens. For the most capable listed option, evaluate Perplexity pplx-70b-online, the largest model in the family, which combines structured outputs with real-time web access.

Models (11)