Models on Baseten API
14 models available · Baseten
| Model | Input (per 1M) | Output (per 1M) | Context |
|---|---|---|---|
| Camel 5B | — | — | — |
| CodeLlama 7B | — | — | 100K |
| Llama 2 7B Chat | — | — | 4K |
| Llama 3 70B Instruct | — | — | 8K |
| Llama 3 8B Instruct | — | — | 8K |
| Mistral 7B v0.1 | — | — | 8K |
| Mixtral 8x22B v0.1 | — | — | 64K |
| Mixtral 8x7B | — | — | 32K |
| NSQL 350M | — | — | — |
| Phi-3 Mini 128K | — | — | 128K |
| Phi-3 Mini 4K | — | — | 4K |
| Stable Code Alpha 3B | — | — | 16K |
| WizardLM 7B | — | — | — |
| Zephyr 7B Alpha | — | — | — |
About Baseten API
Baseten is an AI platform for developing and deploying machine learning models. It supports open-source models, so developers can build on existing frameworks and tools, and its deployment workflow is designed to move models into production quickly.

The platform is built to scale with fluctuating workloads without compromising performance, and it offers high-speed inference for applications that need real-time processing and decision-making. Pricing follows a pay-as-you-go model, which keeps upfront investment low and ties cost to actual resource usage.

Deployment is flexible: models can run in the cloud, on-premises, or on edge devices, letting organizations match their deployment strategy to their existing infrastructure. Together, these features let businesses harness AI while keeping control over costs and deployment logistics.
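As a rough illustration of what calling a model deployed on such a platform looks like, here is a minimal sketch that builds an authenticated predict request. The model ID (`abc123`), endpoint URL pattern, `Api-Key` header scheme, and payload shape are all assumptions for illustration, not details taken from this page; consult Baseten's own API documentation for the actual interface.

```python
import json
import os
import urllib.request

def build_predict_request(model_id: str, api_key: str, payload: dict) -> urllib.request.Request:
    """Construct (but do not send) a predict request for a deployed model.

    Assumes a per-model endpoint of the form
    https://model-<id>.api.baseten.co/production/predict and an
    "Api-Key" authorization header -- both hypothetical here.
    """
    url = f"https://model-{model_id}.api.baseten.co/production/predict"
    return urllib.request.Request(
        url,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Api-Key {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

# Build a request with a placeholder model ID and key; sending it would
# require a real deployment, so we only inspect the constructed request.
req = build_predict_request(
    "abc123",
    os.environ.get("BASETEN_API_KEY", "demo-key"),
    {"prompt": "Write a haiku about GPUs.", "max_tokens": 64},
)
print(req.full_url)
```

Pay-as-you-go billing means each such request is metered against actual usage, which is why the pricing table above is expressed per million tokens.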