LLM Reference

WizardLM 7B

About

WizardLM 7B is a 7-billion-parameter large language model designed for instruction following. It was trained on instructions generated with the Evol-Instruct method, which evolves a seed set of prompts into a broad range of open-domain instructions, and it is built on the Llama 7B architecture with merged delta weights. The model is distributed in several forms, including quantized versions targeting different hardware: GGML builds for CPU inference and GPTQ builds for GPU inference. Some variants are uncensored, meaning they lack filtering for potentially harmful content, so responsibility for generated output rests with the user. WizardLM 7B performs well on natural language tasks such as text generation and question answering, though uncensored variants may produce inappropriate output because guardrails are absent. The training corpus presumably includes extensive instruction and dialogue data, and performance varies with quantization level, trading accuracy for speed and memory.
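Because WizardLM 7B is an instruction-tuned model, prompts are usually wrapped in a fixed single-turn template before being sent to the model. A minimal sketch, assuming the `{instruction}\n\n### Response:` template commonly documented on community model cards for this model (verify against the card of the specific variant and quantization you download):

```python
def build_prompt(instruction: str) -> str:
    """Wrap a user instruction in the single-turn WizardLM 7B template.

    The "### Response:" suffix below is an assumption based on common
    community model cards for WizardLM 7B; other variants may differ.
    """
    return f"{instruction}\n\n### Response:"

prompt = build_prompt("Explain what quantization does to a language model.")
print(prompt)
```

The formatted string can then be passed to whichever runtime matches the variant in use, e.g. a GGML build through a CPU inference library or a GPTQ build through a GPU loader.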

Capabilities

- Multimodal
- Function Calling
- Tool Use
- JSON Mode

Providers (1)

Provider    | Input (per 1M) | Output (per 1M) | Type
Baseten API |                |                 | Serverless

Specifications

Family: WizardLM
Parameters: 7B
Architecture: Decoder-only
Specialization: General