LLM Reference
Phi-2

Microsoft Research · MIT · Open Source

About

The Phi family of language models, developed by Microsoft Research, comprises several small language models (SLMs) designed to achieve high performance despite their relatively small size. These models use a Transformer architecture and are trained on a blend of synthetic and web datasets [1][2]. The training data emphasizes quality, prioritizing "textbook-quality" information to strengthen reasoning and language understanding [1]. The series includes Phi-1, Phi-1.5, and Phi-2, each version incorporating advances in model scaling and data curation [1]. Phi-2, the latest in the series, has 2.7 billion parameters and exhibits state-of-the-art performance among base models with fewer than 13 billion parameters across a range of benchmarks [1][2]. Notably, Phi-2 has not undergone reinforcement learning from human feedback (RLHF) or instruction fine-tuning [1][2], and the model is made available to researchers to support exploration of safety challenges and other aspects of language model development [2].

Details

License: MIT
Models: 1