sileod committed 9a77395 (1 parent: 9d545db)

Update README.md

Files changed (1): README.md (+2 −2)
README.md CHANGED
@@ -317,8 +317,8 @@ tags:
  # Model Card for DeBERTa-v3-small-tasksource-nli


- [DeBERTa-v3-small](https://hf.co/microsoft/deberta-v3-small) with context length of 1680 fine-tuned on tasksource for 250k steps. I oversampled long NLI tasks (ConTRoL, doc-nli).
- Training data include helpsteer v1/v2, logical reasoning tasks (FOLIO, FOL-nli, LogicNLI...), OASST, hh/rlhf, linguistics oriented NLI tasks, tasksource-dpo, fact verification tasks.
+ [DeBERTa-v3-small](https://hf.co/microsoft/deberta-v3-small) with a context length of 1680 tokens, fine-tuned on tasksource for 250k steps. I oversampled long NLI tasks (ConTRoL, doc-nli).
+ Training data includes HelpSteer v1/v2, logical reasoning tasks (FOLIO, FOL-nli, LogicNLI...), OASST, hh/rlhf, linguistics-oriented NLI tasks, tasksource-dpo, and fact verification tasks.

  This model is suitable for long context NLI or as a backbone for reward models or classifiers fine-tuning.
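Since the card describes an NLI model usable for zero-shot classification, a minimal usage sketch may help. Note the hub repository id below is an assumption (the card only names the model "DeBERTa-v3-small-tasksource-nli"); substitute the actual repository name.

```python
# Sketch: zero-shot classification through the model's NLI head via transformers.
from transformers import pipeline

# Hypothetical hub id -- replace with the actual repository name.
classifier = pipeline(
    "zero-shot-classification",
    model="tasksource/deberta-small-long-nli",
)

result = classifier(
    "The contract must be signed by both parties before work begins.",
    candidate_labels=["legal", "sports", "cooking"],
)
print(result["labels"][0])
```

The long 1680-token context makes the same pattern applicable to document-level premises, which is where the oversampled long-NLI training (ConTRoL, doc-nli) is meant to pay off.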