Update README.md
README.md
CHANGED
@@ -38,6 +38,9 @@ The following hyperparameters were used during training:
 - lr_scheduler_warmup_ratio: 0.03
 - num_epochs: 2.0
 
+### Training sample code
+Here is the sample code to reproduce the model: [Sample Code](https://github.com/intel/intel-extension-for-transformers/blob/main/intel_extension_for_transformers/neural_chat/examples/finetuning/finetune_neuralchat_v3/README.md).
+
 ## Prompt Template
 
 ```
@@ -95,7 +98,7 @@ The license on this model does not constitute legal advice. We are not responsib
 
 ## Organizations developing the model
 
-The NeuralChat team with members from Intel/
+The NeuralChat team with members from Intel/DCAI/AISE/AIPT. Core team members: Kaokao Lv, Liang Lv, Chang Wang, Wenxin Zhang, Xuhui Ren, and Haihao Shen.
 
 ## Useful links
 * Intel Neural Compressor [link](https://github.com/intel/neural-compressor)
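The hyperparameters in the first hunk are the kind usually passed to Hugging Face `transformers.TrainingArguments`. A minimal sketch of that mapping, assuming the standard field names `warmup_ratio` and `num_train_epochs` correspond to the README's `lr_scheduler_warmup_ratio` and `num_epochs` (the actual training script is the linked sample code, not this sketch):

```python
# Hedged sketch: mapping the README's listed hyperparameters onto the
# standard Hugging Face TrainingArguments field names. The mapping
# (lr_scheduler_warmup_ratio -> warmup_ratio, num_epochs -> num_train_epochs)
# is an assumption; see the linked sample code for the real training setup.
training_kwargs = {
    "warmup_ratio": 0.03,     # lr_scheduler_warmup_ratio: 0.03
    "num_train_epochs": 2.0,  # num_epochs: 2.0
}

# With transformers installed, these would typically be expanded as:
#   from transformers import TrainingArguments
#   args = TrainingArguments(output_dir="out", **training_kwargs)
print(training_kwargs)
```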