LLM Reference

Starling LM 7B Beta

About

Starling LM 7B Beta is an open-source large language model from Nexusflow, built on a 7-billion-parameter transformer architecture tailored for conversational AI. It was fine-tuned with Reinforcement Learning from AI Feedback (RLAIF) to improve helpfulness and reduce harmful outputs. Starting from Openchat-3.5-0106 (itself derived from Mistral-7B-v0.1), training used the berkeley-nest/Nectar ranking dataset, the Nexusflow/Starling-RM-34B reward model, and Proximal Policy Optimization (PPO). It achieves an improved MT-Bench score of 8.12, and its capabilities span engaging conversation, informative responses, and tasks such as content and code generation. While it performs strongly among 7B models, two practical considerations stand out: outputs tend to be verbose, and the model requires strict adherence to its chat template. It is licensed under Apache-2.0, with the added condition that it not be used to compete with OpenAI.
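Because the model is sensitive to its chat template, prompts are usually built with the OpenChat-style format it inherits from Openchat-3.5. The sketch below assumes the "GPT4 Correct User" / "GPT4 Correct Assistant" role labels and the `<|end_of_turn|>` separator reported in the model card; verify the exact template (e.g. via the tokenizer's `apply_chat_template`) before relying on it.

```python
def build_starling_prompt(turns):
    """Format (role, text) chat turns with the OpenChat-style template.

    Assumption: Starling LM 7B Beta uses 'GPT4 Correct User' /
    'GPT4 Correct Assistant' role labels separated by <|end_of_turn|>,
    as inherited from Openchat-3.5-0106.
    """
    parts = []
    for role, text in turns:
        label = "GPT4 Correct User" if role == "user" else "GPT4 Correct Assistant"
        parts.append(f"{label}: {text}<|end_of_turn|>")
    # Trailing assistant label cues the model to generate its reply.
    parts.append("GPT4 Correct Assistant:")
    return "".join(parts)


prompt = build_starling_prompt([("user", "What is RLAIF?")])
```

In practice, prefer the tokenizer's built-in chat template over hand-rolled formatting so template changes are picked up automatically.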

Capabilities

Multimodal · Function Calling · Tool Use · JSON Mode

Providers (1)

Provider               Input (per 1M)   Output (per 1M)   Type
Cloudflare Workers AI  —                —                 Serverless
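Since Cloudflare Workers AI is the listed provider, a request can be sketched against its REST endpoint (`/accounts/{account_id}/ai/run/{model}`). The model slug `@cf/nexusflow/starling-lm-7b-beta` and the account ID and token below are assumptions for illustration; check Cloudflare's model catalog for the exact identifier.

```python
import json
import urllib.request

API_BASE = "https://api.cloudflare.com/client/v4/accounts"
# Assumption: the Workers AI slug for this model; confirm in Cloudflare's catalog.
MODEL = "@cf/nexusflow/starling-lm-7b-beta"


def build_chat_request(account_id, api_token, messages):
    """Build an authenticated POST request for a Workers AI chat completion."""
    url = f"{API_BASE}/{account_id}/ai/run/{MODEL}"
    body = json.dumps({"messages": messages}).encode("utf-8")
    return urllib.request.Request(
        url,
        data=body,
        headers={
            "Authorization": f"Bearer {api_token}",
            "Content-Type": "application/json",
        },
        method="POST",
    )


# Hypothetical credentials; replace with a real account ID and API token.
req = build_chat_request(
    "my-account-id",
    "my-api-token",
    [{"role": "user", "content": "Hello"}],
)
# with urllib.request.urlopen(req) as resp:   # requires valid credentials
#     print(json.load(resp)["result"]["response"])
```

Being serverless, the provider bills per request/token rather than for reserved capacity, so no instance provisioning is needed.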

Specifications

Family: Starling
Architecture: Decoder Only
Specialization: General