
Platypus2
About
The Platypus2 family of large language models (LLMs) stands out for its strong performance on the HuggingFace Open LLM Leaderboard. Developed at Boston University, these models are built on the LLaMA-2 transformer architecture and fine-tuned on a curated dataset named Open-Platypus, which emphasizes STEM and logical reasoning. Through parameter-efficient fine-tuning (PEFT) techniques such as Low-Rank Adaptation (LoRA), Platypus2 achieves strong results with less training data and lower computational cost than comparable models. The models are available in quantized formats such as GGUF, enabling efficient inference across diverse hardware. While they excel in English, caution is advised when applying them to other languages, and safety evaluation is recommended prior to deployment. Notably, the 13B-parameter version has been further enhanced by merging with models such as OpenOrca.
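The parameter savings from LoRA come from freezing the pretrained weight matrix W and learning only a low-rank update ΔW = B·A scaled by alpha/r. A minimal NumPy sketch of the idea follows; the dimensions, rank, and alpha value are illustrative assumptions, not the actual Platypus2 training configuration:

```python
import numpy as np

d, r = 4096, 16      # hidden size and LoRA rank (illustrative, not Platypus2's real config)
alpha = 32           # LoRA scaling hyperparameter (illustrative)

rng = np.random.default_rng(0)
W = rng.standard_normal((d, d))          # frozen pretrained weight matrix
A = rng.standard_normal((r, d)) * 0.01   # trainable low-rank factor (r x d)
B = np.zeros((d, r))                     # initialized to zero so the update starts at 0

# Effective weight during fine-tuning: only A and B receive gradients
W_eff = W + (alpha / r) * (B @ A)

full_params = W.size                 # parameters a full fine-tune would update
lora_params = A.size + B.size        # parameters LoRA actually trains
print(f"trainable fraction: {lora_params / full_params:.4%}")
```

Because only A and B are trained (2·d·r parameters instead of d²), the trainable fraction here is under 1%, which is what makes fine-tuning feasible with reduced data and compute.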