
ELYZA Japanese Llama 2
About
The ELYZA Japanese Llama 2 family of LLMs adapts Meta's Llama 2 for stronger Japanese language processing. Building on Llama 2's extensive English pre-training, ELYZA performs additional pre-training on Japanese text, adding Japanese capability on top of an already robust base model. The models come in several parameter sizes, such as 7B and 13B, and in several variants: "instruct" versions are fine-tuned to follow instructions, while "fast" variants prioritize inference speed. They handle diverse natural language processing tasks such as text generation, question answering, and translation, and are available for both commercial and research use.
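As a rough sketch of how an "instruct" variant might be used, the snippet below loads a model through Hugging Face Transformers and wraps a question in the Llama 2 chat format. The model ID, the system message, and the prompt template are assumptions based on common conventions for Llama 2 derivatives; check the model card on the Hugging Face Hub before relying on them.

```python
# Hypothetical usage sketch for an ELYZA instruct model via Transformers.
# The Hub ID, system message, and prompt format below are assumptions.

def build_prompt(user_message: str,
                 system_message: str = "あなたは誠実で優秀な日本語アシスタントです。") -> str:
    """Wrap a user message in the Llama 2 chat format ([INST] / <<SYS>> tags)."""
    return (f"<s>[INST] <<SYS>>\n{system_message}\n<</SYS>>\n\n"
            f"{user_message} [/INST]")

if __name__ == "__main__":
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "elyza/ELYZA-japanese-Llama-2-7b-instruct"  # assumed Hub ID
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id)

    # Ask a question in Japanese and decode the generated continuation.
    prompt = build_prompt("日本の首都はどこですか？")
    inputs = tokenizer(prompt, return_tensors="pt")
    output_ids = model.generate(**inputs, max_new_tokens=128)
    print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```

The same pattern applies to the base and "fast" variants; only the model ID and, for base models, the prompt format (plain text rather than chat tags) would change.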