SeaLLM 7B
About
SeaLLM 7B is a family of multilingual large language models developed specifically for Southeast Asian (SEA) languages. It performs strongly on multilingual tasks and surpasses larger models such as GPT-3.5 on several SEA-language benchmarks. The models use a transformer architecture, with successive versions built on different base models (Llama-2, Mistral-7B, and gemma-7b) and refined through further pre-training and fine-tuning. They perform well at math reasoning, instruction following, and function calling, while accounting for the cultural nuances of SEA languages. As with any language model, output quality depends on the quality and biases of the training data, and the model may occasionally produce inaccurate information.
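As a quick illustration, here is a minimal sketch of loading and querying a SeaLLM 7B checkpoint with Hugging Face transformers. The checkpoint name SeaLLMs/SeaLLM-7B-v2 and the use of the built-in chat template are assumptions to verify against the official model card.

```python
# Minimal sketch: load a SeaLLM 7B checkpoint and run one chat turn.
# "SeaLLMs/SeaLLM-7B-v2" is an assumed checkpoint name; check the SeaLLMs
# organization on the Hugging Face Hub for current releases.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "SeaLLMs/SeaLLM-7B-v2"  # assumed checkpoint name
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# A Vietnamese prompt ("Explain what a large language model is."),
# formatted with the model's own chat template.
messages = [{"role": "user", "content": "Hãy giải thích mô hình ngôn ngữ lớn là gì."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256)
# Decode only the newly generated tokens, not the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```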
Capabilities
Multimodal, Function Calling, Tool Use, JSON Mode
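For function calling and JSON mode, a common pattern with open-weights models is to describe a tool in the prompt and ask for a JSON-only reply. The sketch below shows that generic prompt-based pattern, not a documented SeaLLM API; the tool schema, reply format, and checkpoint name are all hypothetical.

```python
# Generic prompt-based function calling (illustrative only; not a documented
# SeaLLM API). The tool schema and JSON reply format are hypothetical.
import json
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "SeaLLMs/SeaLLM-7B-v2"  # assumed checkpoint name, as above
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# Hypothetical tool schema, described to the model inside the user prompt.
tool_spec = {
    "name": "get_weather",
    "description": "Look up the current weather for a city.",
    "parameters": {"city": "string"},
}
prompt = (
    "You can call one tool: " + json.dumps(tool_spec) + ". "
    'To call it, reply with ONLY a JSON object such as '
    '{"tool": "get_weather", "arguments": {"city": "..."}}.\n\n'
    "What is the weather in Hanoi right now?"
)

messages = [{"role": "user", "content": prompt}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=128)
reply = tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True)

# Parse the (hoped-for) JSON tool call; dispatch to real code if it is valid.
try:
    call = json.loads(reply)
    print("Tool:", call["tool"], "Arguments:", call["arguments"])
except (json.JSONDecodeError, KeyError):
    print("Model replied in plain text:", reply)
```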