LLM Reference

Phi-2

About

Phi-2 is a compact 2.7-billion-parameter language model from Microsoft and part of its Phi series. It shows strong reasoning and language-understanding capabilities, outperforming much larger models, including some with up to 25 times more parameters. Phi-2 was trained on a diverse dataset of 1.4 trillion tokens that combines high-quality synthetic data with curated web content to strengthen its common-sense reasoning and general knowledge. Notably, despite not being fine-tuned with reinforcement learning from human feedback (RLHF), it exhibits improved safety behavior and reduced bias, making it a useful asset for natural language processing research and development.

Capabilities

- Multimodal
- Function Calling
- Tool Use
- JSON Mode

Providers (5)

| Provider | Input (per 1M) | Output (per 1M) | Type |
| --- | --- | --- | --- |
| Replicate API | | | Serverless |
| Azure OpenAI | | | Provisioned |
| Cloudflare Workers AI | | | Serverless |
| Together AI API | $0.10 | $0.10 | Serverless |
| Fireworks AI Platform | | | Provisioned |
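With per-million-token rates like the Together AI listing above ($0.10 per 1M tokens for both input and output), the cost of a request is simply tokens times rate divided by 1,000,000. A minimal sketch; the function name and token counts are hypothetical, not part of any provider's API:

```python
def request_cost(input_tokens: int, output_tokens: int,
                 input_price_per_m: float, output_price_per_m: float) -> float:
    """Cost in USD for one request, given per-1M-token prices."""
    return (input_tokens * input_price_per_m
            + output_tokens * output_price_per_m) / 1_000_000

# Hypothetical request: 1,500 prompt tokens, 500 completion tokens
# at the listed $0.10 / $0.10 per 1M rates.
cost = request_cost(input_tokens=1_500, output_tokens=500,
                    input_price_per_m=0.10, output_price_per_m=0.10)
print(f"${cost:.6f}")  # $0.000200
```

At these rates even long prompts cost fractions of a cent, which is typical for small serverless-hosted models.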

Benchmark Scores (4)

| Benchmark | Score | Version | Source |
| --- | --- | --- | --- |
| Google-Proof Q&A (GPQA) | 41.2 | diamond | research |
| HellaSwag | 85.9 | 10-shot | research |
| HumanEval | 59.7 | pass@1 | research |
| Massive Multitask Language Understanding (MMLU) | 68.3 | 5-shot | research |
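The HumanEval row reports pass@1: the probability that a single sampled completion passes the unit tests. When n samples are drawn per problem, pass@k is commonly computed with the unbiased combinatorial estimator from the HumanEval paper. A sketch of that estimator; the sample counts in the example are hypothetical:

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator: probability that at least one of k
    samples passes, given n generated samples of which c are correct.
    pass@k = 1 - C(n - c, k) / C(n, k)."""
    if n - c < k:
        return 1.0  # fewer failures than draws: some draw must succeed
    return 1.0 - comb(n - c, k) / comb(n, k)

# Hypothetical problem: 10 samples generated, 6 passed the tests.
# For k = 1 the estimator reduces to the plain success rate c / n.
print(round(pass_at_k(10, 6, 1), 6))
```

For k = 1 the estimator is just c / n, so a pass@1 of 59.7 means roughly 59.7% of single completions solved their problem.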

Specifications

Family: Phi-2
Released: 2023-12-12
Parameters: 2.7B
Architecture: Decoder-only
Specialization: general