---
base_model:
- Qwen/Qwen2.5-72B
tags:
- roleplay
- storywriting
- qwen2.5
- finetune
- transformers
- pytorch
---

# Zeus Labs ~ Chronos-Platinum-72B

![image/png](https://cdn-uploads.huggingface.co/production/uploads/630417380907b9a115c6aa9f/G05mAhqcp4S_WBfE2vBLl.png)

Qwen 2.5 72B base model, trained for two epochs on the Chronos Divergence dataset using ChatML. It works well for roleplaying and storywriting, as well as general assistant tasks.

## Instruct Template

This model uses `ChatML`; an example is below. It is available as a preset in many frontends.

```
<|im_start|>system
You are a helpful assistant<|im_end|>
<|im_start|>user
Hello there!<|im_end|>
<|im_start|>assistant
Hi! I'm an AI assistant, designed to help people like you with all sorts of tasks. Is there anything you need help with?<|im_end|>
<|im_start|>user
I was wondering how transformers work?<|im_end|>
<|im_start|>assistant
```

## Quantization

Please note that we tested this model with a 5.0bpw EXL2 quant. Results are not expected to be the same when going below this quantization.

Thanks to our model quantizers!

#### LlamaCPP (GGUF)
[bartowski](https://huggingface.co/bartowski/Chronos-Platinum-72B-GGUF)

#### Exllama2
TODO!

#### FP8
TODO!

## Sampling Settings

Here are some settings that work well with this model:

```
Coming soon
```

## Credit

Thank you to my team, consisting of [@ToastyPigeon](https://huggingface.co/ToastyPigeon), [@Fizzarolli](https://huggingface.co/Fizzarolli), and myself, [@elinas](https://huggingface.co/elinas).

Additional thanks to [@AlpinDale](https://huggingface.co/AlpinDale) and the rest of the PygmalionAI team for graciously providing the compute to finetune this model!

Thank you to [anthracite-org](https://huggingface.co/anthracite-org) as well for sponsoring this model.

## Additional Details

We used a combination of provided logs and WizardLM Evol data, both cleaned up and de-slopped. Thanks to Anthropic and OpenAI for the models used to generate synthetic and partially synthetic data to train this model. Thanks to Elon Musk for being based enough to train AI that compares to the top models.

If you have any questions or concerns, please post in the community tab.

DISCLAIMER: Outputs generated by the model are not reflective of our views.
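
## Usage Examples

Below is a minimal sketch of chatting with the model through Hugging Face `transformers`, using the ChatML template shown above. The repo id `ZeusLabs/Chronos-Platinum-72B` and the sampling values are illustrative assumptions, not tested settings; a 72B model in bf16 needs roughly 145 GB of memory, so multi-GPU or quantized inference is usually required.

```python
# Minimal sketch: chat with the model via transformers using its ChatML template.
# NOTE: the repo id below is an assumption for illustration; substitute the actual one.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ZeusLabs/Chronos-Platinum-72B"  # hypothetical repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",  # shard across available GPUs
)

messages = [
    {"role": "system", "content": "You are a helpful assistant"},
    {"role": "user", "content": "I was wondering how transformers work?"},
]

# apply_chat_template renders the ChatML markup (<|im_start|>...<|im_end|>)
# and appends the assistant prompt so the model continues as the assistant.
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=512, do_sample=True, temperature=0.8)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```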
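
For the GGUF quants linked above, here is a similar sketch using the `llama-cpp-python` bindings, which can also format ChatML messages for you. The filename is a placeholder; download a quant level of your choice from bartowski's repo, and treat the context size and GPU offload values as assumptions to tune for your hardware.

```python
# Minimal sketch: run a GGUF quant with llama-cpp-python.
# NOTE: the model filename is a placeholder; fetch a real quant from the GGUF repo above.
from llama_cpp import Llama

llm = Llama(
    model_path="Chronos-Platinum-72B-Q5_K_M.gguf",  # placeholder filename
    n_gpu_layers=-1,  # offload all layers to GPU if VRAM allows
    n_ctx=8192,       # context window; raise or lower to fit memory
)

response = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a helpful assistant"},
        {"role": "user", "content": "Hello there!"},
    ],
    max_tokens=512,
    temperature=0.8,  # illustrative value; official sampling settings are coming soon
)
print(response["choices"][0]["message"]["content"])
```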