Chronos 70B
About
Chronos 70B is a fine-tune of the Llama 2 70B base model aimed at chat, roleplay, and storywriting, with an emphasis on reasoning and logic. Thanks to its specialized training data, it produces lengthy, coherent outputs and supports a context length of up to 4096 tokens. Running the unquantized 70B-parameter model is not feasible on consumer GPUs, but quantized versions in formats such as GPTQ and AWQ are available, offering various trade-offs between speed and accuracy. The model expects prompts in the Alpaca instruction format for best results. It is released under the CC BY-NC 4.0 license, which permits non-commercial use only.
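The Alpaca format mentioned above wraps a request in instruction/response markers. A minimal sketch of assembling such a prompt in Python (the helper name and example text are illustrative, not taken from the model card):

```python
def build_alpaca_prompt(instruction: str, user_input: str = "") -> str:
    """Assemble a prompt in the Alpaca instruction format.

    An optional "### Input:" section is included when extra context
    accompanies the instruction; otherwise the shorter template is used.
    """
    if user_input:
        return (
            "Below is an instruction that describes a task, paired with an "
            "input that provides further context. Write a response that "
            "appropriately completes the request.\n\n"
            f"### Instruction:\n{instruction}\n\n"
            f"### Input:\n{user_input}\n\n"
            "### Response:\n"
        )
    return (
        "Below is an instruction that describes a task. Write a response "
        "that appropriately completes the request.\n\n"
        f"### Instruction:\n{instruction}\n\n"
        "### Response:\n"
    )

# Hypothetical usage: the model's completion would be generated after
# the trailing "### Response:" marker.
prompt = build_alpaca_prompt("Continue the story from the last paragraph.")
```

The generated text is then appended by the model after the final `### Response:` line, so leaving the prompt open-ended at that marker is what signals the model to respond.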