LLM Reference

MythoMist 7B

About

MythoMist 7B is an experimental LLM built on the Mistral architecture, with a merge algorithm aimed at reducing the prevalence of words carrying negative connotations that are common in ChatGPT-derived roleplaying data. It incorporates elements from models such as Neural-chat-7b-v3-1 and Synatra-7B-v0.3-RP, combined through a custom benchmark-driven merging algorithm. With roughly 7.24 billion parameters, it is distributed in quantized formats such as GGUF and GPTQ for reduced hardware requirements, and is supported by tools like llama.cpp and text-generation-webui. The model is designed to be prompted with an Alpaca-style format.
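Since the model expects an Alpaca-style prompt, the wrapper below sketches what that looks like in practice. This is a minimal sketch: the preamble wording follows the common Alpaca template, but individual frontends may vary it slightly, and the `build_prompt` helper is illustrative rather than part of any official API.

```python
# Minimal sketch of an Alpaca-style prompt wrapper for MythoMist 7B.
# The preamble wording is the widely used Alpaca template; exact phrasing
# may differ between frontends, so treat this as an assumption.

ALPACA_TEMPLATE = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n"
    "### Response:\n"
)

def build_prompt(instruction: str) -> str:
    """Wrap a user instruction in the Alpaca prompt template."""
    return ALPACA_TEMPLATE.format(instruction=instruction)

prompt = build_prompt("Describe a misty mountain village.")
print(prompt)
```

In tools like text-generation-webui this formatting is usually applied automatically when the Alpaca instruction template is selected; the helper is only needed when calling the model directly (e.g. through llama.cpp bindings).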

Capabilities

Vision, Multimodal, Reasoning, Function Calling, Tool Use, Structured Outputs, Code Execution

Specifications

Family: Mytho
Released: 2023-10-27
Parameters: 7B
Architecture: Decoder Only
Specialization: General
Training: Fine-tuning

Created by

Exploring roleplaying dynamics in AI research
