
Qwen1.5
About
The Qwen1.5 family is a series of large language models (LLMs) developed by Alibaba Cloud, serving as a beta precursor to the Qwen2 series. The collection spans eight model sizes, from 0.5 billion to 72 billion parameters, and also includes a 14-billion-parameter Mixture of Experts (MoE) model. Available in both base and fine-tuned chat variants, these models offer key advancements such as better human-aligned responses, stronger multilingual support, and a context length of up to 32,768 tokens. Designed for ease of use, the Qwen1.5 models integrate with popular frameworks such as Hugging Face Transformers, vLLM, and llama.cpp. In addition, a specialized CodeQwen1.5 model focuses on code generation and supports contexts of up to 64K tokens. A brief usage sketch follows below.
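
As an illustration of the Transformers integration mentioned above, the sketch below loads a chat variant and generates a reply. It assumes the Qwen/Qwen1.5-0.5B-Chat checkpoint and a recent transformers release with chat-template support; the prompt text and generation settings are illustrative choices, not prescribed defaults.

```python
# Minimal sketch: running a Qwen1.5 chat model with Hugging Face Transformers.
# Assumes the "Qwen/Qwen1.5-0.5B-Chat" checkpoint; any other chat-variant size works the same way.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Qwen/Qwen1.5-0.5B-Chat"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",   # pick the dtype stored in the checkpoint
    device_map="auto",    # place the model on GPU if one is available
)

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Give me a short introduction to large language models."},
]

# apply_chat_template formats the conversation using the model's built-in chat template
text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer([text], return_tensors="pt").to(model.device)

generated = model.generate(**inputs, max_new_tokens=256)

# Strip the prompt tokens so only the newly generated reply is decoded
reply_ids = generated[0][inputs.input_ids.shape[-1]:]
print(tokenizer.decode(reply_ids, skip_special_tokens=True))
```

The same checkpoints can instead be served with vLLM or converted to GGUF for llama.cpp; the Transformers path shown here is simply the most direct way to try a model locally.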