Qwen1.5-32B
About
Qwen1.5-32B is a large language model from the Qwen1.5 series, released as a beta version of Qwen2. It is a transformer-based, decoder-only model pretrained on a large, diverse corpus. Key features include 32 billion parameters, support for context lengths up to 32K tokens, and multilingual capability. The architecture uses SwiGLU activation and grouped-query attention, and the model shows substantial performance gains over its predecessor, especially in chat applications. Alongside the base version, a chat variant is fine-tuned for conversational AI. The model is available on Hugging Face and other platforms for diverse applications.
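To illustrate the SwiGLU activation mentioned above, here is a minimal NumPy sketch of a SwiGLU feed-forward gate. The weight matrices and dimensions are illustrative stand-ins, not the model's actual parameters:

```python
import numpy as np

def silu(x):
    # SiLU / Swish activation: x * sigmoid(x)
    return x / (1.0 + np.exp(-x))

def swiglu(x, w_gate, w_up):
    # SwiGLU gate: SiLU(x @ w_gate) multiplied elementwise by (x @ w_up).
    # w_gate, w_up: (d_model, d_ff) projections; random here for illustration.
    return silu(x @ w_gate) * (x @ w_up)

# Toy dimensions; the real model's hidden sizes are far larger.
rng = np.random.default_rng(0)
d_model, d_ff = 8, 16
x = rng.standard_normal((2, d_model))
out = swiglu(x, rng.standard_normal((d_model, d_ff)),
             rng.standard_normal((d_model, d_ff)))
print(out.shape)  # (2, 16)
```

In a full transformer block this gated output would be projected back down to `d_model` by a third matrix; only the gating itself is shown here.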
Capabilities
Multimodal, Function Calling, Tool Use, JSON Mode
Providers (2)
| Provider | Input (per 1M tokens) | Output (per 1M tokens) | Type |
|---|---|---|---|
| Together AI API | $0.80 | $0.80 | Serverless |
| Replicate API | — | — | Serverless |