About
The Koala family of large language models (LLMs) originates from the Berkeley Artificial Intelligence Research (BAIR) lab. These models are based on Meta's LLaMA and fine-tuned on dialogue data collected from the web, with a focus on high-quality interactions with advanced LLMs such as ChatGPT. Koala is intended for academic research and is available in several sizes, including 7B and 13B parameter versions. The models are released in formats compatible with Hugging Face and other frameworks, making them easy to use and build on in further research. While their performance is competitive with larger closed-source models, the Koala models remain research prototypes with known limitations in safety, reliability, and content. They were trained on data from sources such as ShareGPT and the HC3 dataset.
