LLM Reference

Orca 2 7B

About

Orca 2 7B is a large language model developed by Microsoft that focuses on reasoning tasks and precise single-turn responses. It is a fine-tuned version of the LLaMA 2 architecture, trained on a synthetic dataset designed to enhance reasoning, with training data moderated by Microsoft Azure content filters. While adept at reasoning over user-provided data, reading comprehension, math problem solving, and text summarization, it is not optimized for chat applications without further fine-tuning. Orca 2 performs strongly in zero-shot settings but shares limitations common to LLMs, including biases and the potential to generate misleading content. It is designed primarily for research; production use requires careful assessment to mitigate potential harms or biases.
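Because Orca 2 targets precise single-turn responses rather than multi-turn chat, prompts are typically one system message plus one user turn. A minimal sketch of the ChatML-style template reported in the Orca 2 model card (the exact special tokens are an assumption taken from that card, not from this page, and should be verified before use):

```python
def build_orca2_prompt(system_message: str, user_message: str) -> str:
    """Format a single-turn prompt in the ChatML-style template
    reported for Orca 2 (assumed; verify against the model card)."""
    return (
        f"<|im_start|>system\n{system_message}<|im_end|>\n"
        f"<|im_start|>user\n{user_message}<|im_end|>\n"
        f"<|im_start|>assistant"
    )

prompt = build_orca2_prompt(
    "You are a cautious assistant that reasons step by step.",
    "If a train travels 120 km in 2 hours, what is its average speed?",
)
```

The prompt string ends at the opening assistant tag so the model continues with its answer from there.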

Capabilities

Vision, Multimodal, Reasoning, Function Calling, Tool Use, Structured Outputs, Code Execution

Providers (1)

Provider             Input (per 1M tokens)   Output (per 1M tokens)   Type
Microsoft Foundry    $0.52                   $0.67                    Provisioned
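Per-request cost from the rates above is (input_tokens / 1M) × input rate + (output_tokens / 1M) × output rate. A quick sketch using the Microsoft Foundry rates from the table (function name is mine):

```python
def request_cost(input_tokens: int, output_tokens: int,
                 input_rate: float = 0.52, output_rate: float = 0.67) -> float:
    """Cost in USD; rates are per 1M tokens (Microsoft Foundry row above)."""
    return ((input_tokens / 1_000_000) * input_rate
            + (output_tokens / 1_000_000) * output_rate)

# 10k input tokens + 2k output tokens:
cost = request_cost(10_000, 2_000)  # 0.0052 + 0.00134 ≈ 0.00654 USD
```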

Benchmark Scores (2)

Benchmark                                   Score   Version   Source
HumanEval                                   28.4    pass@1    Open LLM Leaderboard
Massive Multitask Language Understanding    66.5    5-shot    Open LLM Leaderboard
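The HumanEval score is pass@1: the fraction of problems solved by a single generated sample. For reference, pass@k is conventionally computed with the unbiased estimator 1 − C(n−c, k) / C(n, k) over n samples with c correct; a sketch (function name is mine):

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator: n samples per problem, c of them correct.
    pass@k = 1 - C(n-c, k) / C(n, k)."""
    if n - c < k:
        return 1.0  # too few failures to fill k draws: success guaranteed
    return 1.0 - comb(n - c, k) / comb(n, k)

# With n=1 and k=1, pass@1 reduces to the plain success rate per problem;
# averaging it over the benchmark yields scores like the 28.4 above.
```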

Specifications

Family          Orca 2
Released        2023-11-21
Parameters      7B
Architecture    Decoder-only
Specialization  General
Training        Fine-tuning

Created by

Microsoft Research
Advancing the state-of-the-art in AI and computing.

Redmond, Washington, United States
Founded 1991