LLM Reference

SeaLLM 7B V2.5

About

SeaLLM-7B-v2.5 is a multilingual large language model tailored for Southeast Asian (SEA) languages. It outperforms the earlier SeaLLM-13B while using roughly half the parameters, and it handles tasks such as text generation and question answering with strong comprehension of language and context. Built on the Gemma-7b architecture and refined through extensive supervised fine-tuning, it covers ten SEA languages. SeaLLM-7B-v2.5 surpasses GPT-3.5 on several SEA multilingual benchmarks and also performs well on reasoning tasks such as GSM8K and MATH, particularly in non-Latin-script languages, making it a versatile tool for diverse applications. Its open-source release encourages collaborative innovation and further advancement of the model.
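
Since the model is released openly, a quick way to try its text-generation and question-answering abilities is local inference with Hugging Face transformers. The sketch below is illustrative only: the checkpoint id SeaLLMs/SeaLLM-7B-v2.5, the bf16 setting, and the Vietnamese example prompt are assumptions, and it presumes the tokenizer ships a chat template.

```python
# Minimal local-inference sketch with Hugging Face transformers.
# Assumed checkpoint id: "SeaLLMs/SeaLLM-7B-v2.5" (adjust to your deployment).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "SeaLLMs/SeaLLM-7B-v2.5"  # assumption, not confirmed by this page
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # Gemma-based weights are commonly served in bf16
    device_map="auto",
)

# Simple question-answering prompt in Vietnamese, using the tokenizer's chat template.
messages = [{"role": "user", "content": "Thủ đô của Việt Nam là gì?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(inputs, max_new_tokens=128, do_sample=False)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```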

Capabilities

Multimodal, Function Calling, Tool Use, JSON Mode

Providers (1)

Provider     | Input (per 1M) | Output (per 1M) | Type
NVIDIA NIM   | -              | -               | Provisioned
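
Because the listed provider offers the model as a provisioned deployment, access is typically through an OpenAI-compatible chat-completions endpoint. The sketch below is a hedged example, not the provider's documented recipe: the base URL, model id, and environment-variable name are assumptions to be replaced with the values from the provider's catalog.

```python
# Hedged sketch of calling a provisioned NVIDIA NIM deployment through an
# OpenAI-compatible chat-completions interface. Base URL, model id, and the
# NVIDIA_API_KEY variable name are assumptions; check your provider catalog.
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://integrate.api.nvidia.com/v1",  # assumed NIM endpoint
    api_key=os.environ["NVIDIA_API_KEY"],            # assumed env var name
)

response = client.chat.completions.create(
    model="seallms/seallm-7b-v2.5",  # assumed catalog id for SeaLLM-7B-v2.5
    messages=[
        {"role": "user", "content": "Terjemahkan ke bahasa Inggris: Selamat pagi, apa kabar?"},
    ],
    temperature=0.2,
    max_tokens=128,
)
print(response.choices[0].message.content)
```

If the deployment enables the capabilities listed above (for example JSON mode or tool use), they would be requested through the same chat-completions call via the corresponding request parameters.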

Specifications

Family: SeaLLM 2
Parameters: 7B
Architecture: Decoder Only
Specialization: general