LLM Reference

Japanese StableLM Gamma 7B

About

Japanese StableLM Base Gamma 7B is a language model specialized for Japanese text. As a decoder-only transformer, it generates text by repeatedly predicting the next token in a sequence. It was trained on roughly 100 billion Japanese tokens drawn from diverse sources, including Wikipedia and OSCAR, giving it coverage of a wide range of Japanese writing styles. Beyond coherent text generation and general language-modeling tasks, it can be fine-tuned for applications such as text classification and machine translation. As with any base model, it should be deployed with care: its training data may carry biases, and as a decoder-only model it can fall short of encoder-decoder architectures on tasks that demand deep bidirectional context.
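The decoder-only generation loop described above can be sketched with a toy stand-in: at each step the model scores candidate next tokens given the context and the highest-scoring one is appended. The bigram table and vocabulary below are purely illustrative, not the model's real distribution:

```python
# Toy sketch of decoder-only, next-token generation. The BIGRAM table is a
# hypothetical stand-in for the probabilities a real transformer would produce.
from typing import Dict, List

BIGRAM: Dict[str, Dict[str, float]] = {
    "<s>": {"日本": 0.9, "東京": 0.1},
    "日本": {"語": 0.7, "の": 0.3},
    "語": {"は": 0.8, "を": 0.2},
}

def generate(prompt: List[str], max_new_tokens: int = 3) -> List[str]:
    tokens = list(prompt)
    for _ in range(max_new_tokens):
        dist = BIGRAM.get(tokens[-1])
        if not dist:  # no known continuation: stop early
            break
        # Greedy decoding: append the most probable next token.
        tokens.append(max(dist, key=dist.get))
    return tokens

print(generate(["<s>"]))  # → ['<s>', '日本', '語', 'は']
```

A real deployment would replace the bigram lookup with a forward pass of the 7B transformer and typically sample from the distribution rather than always taking the argmax.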

Capabilities

Vision, Multimodal, Reasoning, Function Calling, Tool Use, Structured Outputs, Code Execution

Providers (1)

Provider | Input (per 1M) | Output (per 1M) | Type
Fireworks AI | $0.20 | $0.20 | Provisioned
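With a flat $0.20 per million tokens for both input and output, the cost of a request is a simple linear function of token counts. A minimal helper (the function name and formatting are illustrative, not part of any provider API):

```python
# Estimate per-request USD cost from the per-1M-token rates in the table above.
RATE_INPUT = 0.20   # USD per 1M input tokens (Fireworks AI)
RATE_OUTPUT = 0.20  # USD per 1M output tokens

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the USD cost of one request at the listed rates."""
    return (input_tokens * RATE_INPUT + output_tokens * RATE_OUTPUT) / 1_000_000

# e.g. a 2,000-token prompt with a 500-token completion:
print(f"${request_cost(2_000, 500):.6f}")  # → $0.000500
```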

Specifications

Released: 2023-04-10
Parameters: 7B
Architecture: Decoder-only
Specialization: General
Training: Fine-tuning

Created by

Stability AI, developer of open-source generative AI models.

London, United Kingdom
Founded 2020
