eluzhnica committed
Commit
f5352ac
1 Parent(s): 06f1e12

Add disclaimer

Files changed (1): README.md (+8, -0)
README.md CHANGED
@@ -11,6 +11,14 @@ inference: false
 
 # MPT-7B-Instruct
 
+This is MPT-7B-Instruct, but with added support for finetuning with peft (tested with QLoRA). It has not been finetuned further; the weights are identical to the original MPT-7B-Instruct.
+
+I have not traced through the whole Hugging Face stack to verify that everything works correctly, but it does finetune with QLoRA and the outputs are reasonable.
+Inspired by the implementations at https://huggingface.co/cekal/mpt-7b-peft-compatible/commits/main
+and https://huggingface.co/mosaicml/mpt-7b/discussions/42.
+
+The original description from the MosaicML team follows:
+
 MPT-7B-Instruct is a model for short-form instruction following.
 It is built by finetuning [MPT-7B](https://huggingface.co/mosaicml/mpt-7b) on a [dataset](https://huggingface.co/datasets/sam-mosaic/dolly_hhrlhf) derived from the [Databricks Dolly-15k](https://huggingface.co/datasets/databricks/databricks-dolly-15k) and the [Anthropic Helpful and Harmless (HH-RLHF)](https://huggingface.co/datasets/Anthropic/hh-rlhf) datasets.
 * License: _CC-By-SA-3.0_
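
The diff above only asserts that PEFT/QLoRA finetuning runs against this checkpoint; it does not show how. Below is a minimal sketch of what that setup might look like with `transformers`, `bitsandbytes`, and `peft`. The repo id, the LoRA hyperparameters, and the target module names are assumptions for illustration, not anything stated in the commit.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

# Assumed repo id -- substitute the actual checkpoint path for this commit.
model_id = "eluzhnica/mpt-7b-instruct-peft-compatible"

# Load the base model in 4-bit (the "Q" in QLoRA).
# MPT uses custom modeling code, hence trust_remote_code=True.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    trust_remote_code=True,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)

# Cast norms / enable gradient checkpointing hooks for k-bit training,
# then attach LoRA adapters. The target_modules below are the MPT
# attention projection names; verify them against the loaded model.
model = prepare_model_for_kbit_training(model)
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
    target_modules=["Wqkv", "out_proj"],
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the adapter weights are trainable
```

From here one would hand `model` to a standard `transformers.Trainer` with an instruction dataset; since only the small LoRA adapter matrices receive gradients while the 4-bit base weights stay frozen, this is what makes finetuning a 7B model feasible on a single GPU.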