TFG Collection
Datasets and models leveraged and developed during my final degree work (TFG). Info and code can be found at https://github.com/enriquesaou/tfg-lm-qa
This model is a fine-tuned version of microsoft/Phi-3-mini-4k-instruct on an MRQA sample. Validation loss over the course of fine-tuning is reported in the training results table below, and a usage sketch is included further down.
Model description: more information needed.
Intended uses & limitations: more information needed.
Training and evaluation data: more information needed.
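As a rough guide to using the model, the sketch below loads a checkpoint with transformers and asks an extractive-QA style question. The repo id `enriquesaou/phi-3-mini-mrqa` is a placeholder (this card does not name the published checkpoint), and the prompt format is an assumption based on typical MRQA-style fine-tuning rather than something stated here.

```python
# Minimal inference sketch. Assumptions: the fine-tuned checkpoint is published on the
# Hugging Face Hub; "enriquesaou/phi-3-mini-mrqa" is a placeholder id, not confirmed by
# this card. The prompt mirrors a typical extractive-QA format (context + question).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "enriquesaou/phi-3-mini-mrqa"  # placeholder; replace with the actual repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
    trust_remote_code=True,  # needed for Phi-3 on older transformers versions
)

context = "The Amazon rainforest covers much of the Amazon basin of South America."
question = "What does the Amazon rainforest cover?"

# Phi-3-instruct models use a chat template; build the QA prompt through it.
messages = [
    {"role": "user",
     "content": f"Answer the question using only the context.\n\n"
                f"Context: {context}\nQuestion: {question}"}
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

with torch.no_grad():
    output = model.generate(inputs, max_new_tokens=32, do_sample=False)

# Decode only the newly generated tokens (the answer span).
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```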
Training hyperparameters: more information needed (a hedged configuration sketch is given after the results table).

Training results (validation loss evaluated every 100 steps):
| Training Loss | Epoch  | Step | Validation Loss |
|---------------|--------|------|-----------------|
| 1.7895        | 0.0267 | 100  | 1.8179          |
| 1.7531        | 0.0533 | 200  | 1.7730          |
| 1.7567        | 0.08   | 300  | 1.7651          |
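For context on how a run like this might be configured, here is a fine-tuning sketch using the TRL `SFTTrainer` (trl 0.7 to 0.11 style API). Only the 100-step evaluation cadence is taken from the table above; the dataset id (`mrqa`), sample size, learning rate, batch size, and epoch count are illustrative assumptions, not values from this card.

```python
# Sketch of a comparable supervised fine-tuning setup. Assumptions: the MRQA sample is
# drawn from the "mrqa" dataset on the Hub and flattened into prompt/answer text; every
# hyperparameter except eval_steps=100 (which matches the results table) is illustrative.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import SFTTrainer

base_id = "microsoft/Phi-3-mini-4k-instruct"
tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype="auto",
                                             trust_remote_code=True)

# Small illustrative sample; the card does not state the actual split or size used.
raw = load_dataset("mrqa", split="train[:2000]")
split = raw.train_test_split(test_size=0.1, seed=0)

def to_text(example):
    # Turn each MRQA record into one supervised text sequence.
    answer = example["answers"][0] if example["answers"] else ""
    return {"text": f"Context: {example['context']}\n"
                    f"Question: {example['question']}\nAnswer: {answer}"}

train_ds = split["train"].map(to_text, remove_columns=raw.column_names)
eval_ds = split["test"].map(to_text, remove_columns=raw.column_names)

args = TrainingArguments(
    output_dir="phi3-mini-mrqa",
    per_device_train_batch_size=4,   # illustrative
    learning_rate=2e-5,              # illustrative
    num_train_epochs=1,              # illustrative
    logging_steps=100,
    evaluation_strategy="steps",
    eval_steps=100,                  # matches the 100-step cadence in the table above
)

trainer = SFTTrainer(
    model=model,
    args=args,
    train_dataset=train_ds,
    eval_dataset=eval_ds,
    dataset_text_field="text",
    tokenizer=tokenizer,
    max_seq_length=1024,
)
trainer.train()
```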
Base model: microsoft/Phi-3-mini-4k-instruct