## Pricing
| Type | Price (per 1M) |
|---|---|
| Input tokens | $0.80 |
| Output tokens | $0.80 |
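Since input and output tokens are billed per million, the cost of a request can be estimated as shown below. This is a minimal sketch; the function name and default rates (taken from the table above) are illustrative, not part of any official SDK.

```python
def cost_usd(input_tokens: int, output_tokens: int,
             input_rate: float = 0.80, output_rate: float = 0.80) -> float:
    """Estimate request cost in USD.

    Rates are expressed in USD per 1M tokens, matching the pricing table.
    """
    return (input_tokens * input_rate + output_tokens * output_rate) / 1_000_000

# Example: a request with 10K input tokens and 2K output tokens
print(cost_usd(10_000, 2_000))  # 0.0096
```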
## Capabilities
Vision, Multimodal, Reasoning, Function Calling, Tool Use, JSON Mode, Code Execution
## About Qwen1.5-32B
Qwen1.5-32B is a large language model from the Qwen1.5 series, which served as a beta release of Qwen2. It is a transformer-based, decoder-only model pretrained on an extensive dataset. Key features include 32 billion parameters, support for a 32K-token context length, and multilingual capabilities. Architecturally, it uses SwiGLU activation and grouped-query attention, and it shows substantial performance improvements over its predecessor, especially in chat applications. Alongside the base version, a chat variant is fine-tuned for conversational AI. The model is available on Hugging Face and other platforms for a range of applications.
## Model Specs
| Spec | Value |
|---|---|
| Released | 2024-02-05 |
| Parameters | 32B |
| Architecture | Decoder-only |