LLM Reference

Japanese StableLM Gamma 7B

About

Japanese StableLM Base Gamma 7B is a language model specialized for Japanese text processing. It uses a decoder-only transformer architecture to predict the next token in a sequence. Trained on roughly 100 billion Japanese tokens drawn from diverse sources such as Wikipedia and OSCAR, it handles a wide range of Japanese language styles. It excels at generating coherent text and at general language-modeling tasks, and it can be fine-tuned for applications such as text classification and machine translation. Deployment still requires care: the training data may carry biases, and its contextual comprehension is more limited than that of encoder-decoder models on some tasks.
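The autoregressive, decoder-only generation loop described above can be sketched with a toy stand-in for the model. The bigram lookup table and the names `NEXT_TOKEN` and `generate` here are illustrative assumptions, not the model's actual API; the real model scores the full vocabulary at each step rather than consulting a table.

```python
# Toy sketch of decoder-only, next-token generation.
# A hypothetical bigram table stands in for the trained 7B model.
NEXT_TOKEN = {
    "<s>": "Japanese",
    "Japanese": "text",
    "text": "generation",
    "generation": "</s>",
}

def generate(start: str, max_tokens: int = 10) -> list[str]:
    """Greedily predict one token at a time until an end token or the limit."""
    tokens = [start]
    for _ in range(max_tokens):
        nxt = NEXT_TOKEN.get(tokens[-1])
        if nxt is None or nxt == "</s>":
            break
        tokens.append(nxt)
    return tokens

# Each step conditions only on previously emitted tokens, which is the
# defining property of a decoder-only architecture.
print(generate("<s>"))
```

The same loop structure underlies real inference; only the per-step predictor (a table here, a transformer forward pass in practice) differs.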

Capabilities

Multimodal, Function Calling, Tool Use, JSON Mode

Providers (1)

Provider                Input (per 1M)   Output (per 1M)   Type
Fireworks AI Platform                                      Provisioned

Specifications

Parameters: 7B
Architecture: Decoder-only
Specialization: General