Multilingual Constitutional AI
Blog: https://sites.google.com/view/multilingual-constitutional-ai
This model is a fine-tuned version of mistralai/Mistral-Nemo-Base-2407 on the pbevan11/ultrafeedback_binarized_multilingual dataset. It achieves the following results on the evaluation set:

- Loss: 1.1750
Model description: More information needed

Intended uses & limitations: More information needed

Training and evaluation data: More information needed
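Both the base model and the training dataset named above are hosted on the Hugging Face Hub. Below is a minimal sketch of loading them with `transformers` and `datasets`; the Hub id of the fine-tuned checkpoint itself is not given in this card, so the base model id is used as a stand-in.

```python
# Minimal sketch: load the base model and the preference dataset named in this card.
# The fine-tuned checkpoint's Hub id is not listed here, so the base model is loaded;
# substitute the fine-tuned repo id once it is known.
import torch
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "mistralai/Mistral-Nemo-Base-2407"

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(
    base_id,
    torch_dtype=torch.bfloat16,  # Mistral-Nemo is large; bf16 keeps memory manageable
    device_map="auto",
)

# Multilingual binarized UltraFeedback data used for fine-tuning.
dataset = load_dataset("pbevan11/ultrafeedback_binarized_multilingual")
print(dataset)
```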
Training hyperparameters: More information needed

Training results:
| Training Loss | Epoch  | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.6158        | 0.9455 | 13   | 1.3800          |
| 1.1061        | 1.9636 | 27   | 1.1854          |
| 0.9071        | 2.8364 | 39   | 1.1750          |
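The table reports one evaluation pass per epoch over three epochs. As a sketch only, a `transformers` Trainer configuration along the following lines would produce per-epoch training and validation losses like those above; the actual hyperparameter values used for this model are not listed in this card, so everything except the epoch count is a placeholder.

```python
# Sketch only: a TrainingArguments setup that evaluates and logs once per epoch,
# which is how a per-epoch results table like the one above is produced.
# The real hyperparameters are not given in this card; values below are placeholders,
# except num_train_epochs, which matches the three epochs in the table.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="mistral-nemo-multilingual-ft",  # hypothetical output directory
    num_train_epochs=3,                         # three epochs, as in the results table
    eval_strategy="epoch",      # compute validation loss each epoch
                                # (older transformers versions call this evaluation_strategy)
    logging_strategy="epoch",   # report training loss each epoch
    save_strategy="epoch",
    bf16=True,                  # placeholder precision setting
    report_to="none",
)
```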