LLM Reference

Orca 2 13B

About

Orca 2 13B, developed by Microsoft, is a large language model intended primarily for research. It is a fine-tuned version of the LLaMA 2 base model, built to show that smaller language models can achieve strong reasoning when trained on a synthetic dataset constructed specifically to teach reasoning skills. Orca 2 13B performs well on tasks such as reading comprehension, math problem solving, and text summarization, and shows strong performance in zero-shot settings. However, it is not optimized for chat applications and requires fine-tuning for specific tasks. It also shares common LLM limitations, including potential biases and gaps in contextual understanding, and is suitable for research rather than deployment without further evaluation.

Capabilities

Vision · Multimodal · Reasoning · Function Calling · Tool Use · Structured Outputs · Code Execution

Providers (1)

Provider            Input (per 1M)   Output (per 1M)   Type
Microsoft Foundry   $0.81            $0.94             Provisioned
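As a quick sanity check on the listed rates, here is a minimal cost-estimation sketch. The per-1M-token rates come from the Microsoft Foundry row above; the function name and the example token counts are illustrative, not part of any provider API.

```python
# Estimate inference cost from the per-1M-token rates listed above.
# Rates are from the Microsoft Foundry row; token counts are hypothetical.
INPUT_RATE_PER_M = 0.81   # USD per 1M input tokens
OUTPUT_RATE_PER_M = 0.94  # USD per 1M output tokens

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the estimated USD cost for a single request."""
    return (input_tokens * INPUT_RATE_PER_M
            + output_tokens * OUTPUT_RATE_PER_M) / 1_000_000

# Example: a 2,000-token prompt with a 500-token completion.
print(f"${estimate_cost(2_000, 500):.6f}")  # prints "$0.002090"
```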

Benchmark Scores (2)

Benchmark                                   Score   Version   Source
HumanEval                                   35.2    pass@1    Open LLM Leaderboard
Massive Multitask Language Understanding    70.8    5-shot    Open LLM Leaderboard
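The HumanEval score above is reported as pass@1. A minimal sketch of the standard unbiased pass@k estimator used for such scores follows; the sample counts in the example are illustrative, not Orca 2's actual generation statistics.

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator: the probability that at least one of k
    samples drawn from n generations (c of them correct) passes.
    pass@k = 1 - C(n - c, k) / C(n, k)."""
    if n - c < k:
        return 1.0  # fewer than k incorrect samples: a pass is guaranteed
    return 1.0 - comb(n - c, k) / comb(n, k)

# Illustrative example: 200 samples per problem, 70 of them correct.
# For k = 1 the estimator reduces to c / n.
print(round(pass_at_k(200, 70, 1), 3))  # prints 0.35
```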

Specifications

Family          Orca 2
Released        2023-11-21
Parameters      13B
Architecture    Decoder Only
Specialization  General
Training        Fine-tuning

Created by

Microsoft
Advancing the state-of-the-art in AI and computing.

Redmond, Washington, United States
Founded 1991
