Update README.md

README.md CHANGED
@@ -5,4 +5,92 @@ datasets:
language:
- en
pipeline_tag: text-generation
---

# Model Card for Model ID

Basically, this is Q-bert/MetaMath-Cybertron-Starling fine-tuned using DPO on Intel/orca_dpo_pairs. Think of it as Neural-MetaMath-Cybertron-Starling, except I wanted to call it Go Bruins to support my school.

## Model Details

### Model Description

Trained with DPO on Intel/orca_dpo_pairs for 200 steps using unsloth. Feel free to hit me up at rwitz_ on Discord if you have any concerns about the model or want to give me a thumbs up.

- **Developed by:** Ryan Witzman
- **Model type:** Mistral fine-tune
- **Language(s) (NLP):** English
- **License:** MIT
- **Finetuned from model:** Q-bert/MetaMath-Cybertron-Starling

### Model Sources

- **Demo:** Coming Soon!

## Uses

This model CAN AND WILL OUTPUT NSFW AND ILLEGAL CONTENT. USE AT YOUR OWN RISK.

### Direct Use

Works best in oobabooga in chat mode. I believe it performs best when loaded in float16.
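
If you are loading the model with plain `transformers` instead of oobabooga, a minimal float16 loading sketch could look like the following. The generation settings here are illustrative assumptions, not tuned recommendations.

```python
# Minimal float16 loading sketch with plain transformers (assumed setup,
# not the oobabooga configuration described above).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "rwitz/go-bruins"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # load weights in float16 as suggested above
    device_map="auto",          # place layers on the available GPU(s)
)

prompt = "Your input text goes here"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```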

### Out-of-Scope Use

Do not use the model for illegal content or to harass other people. Please be respectful :)

## Bias, Risks, and Limitations

As always, this is an AI model and is not suited to give professional advice or help in times of crisis.

### Recommendations

Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model.

## How to Get Started with the Model

Use the code below to get started with the model.

```python
from transformers import pipeline

# Build a text-generation pipeline backed by the model on the Hub.
model_name = "rwitz/go-bruins"
inference_pipeline = pipeline('text-generation', model=model_name)

# Generate a continuation for a prompt and print it.
input_text = "Your input text goes here"
output = inference_pipeline(input_text)
print(output)
```
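
The pipeline call also accepts standard generation arguments. As a purely illustrative example (the prompt and sampling values below are assumptions, not recommended settings), you can control length and randomness like this:

```python
# Illustrative only: pass generation arguments through the pipeline call.
output = inference_pipeline(
    "Write a short fight song for the Bruins.",
    max_new_tokens=200,   # cap the length of the generated continuation
    do_sample=True,       # sample instead of greedy decoding
    temperature=0.7,      # soften the sampling distribution
)
print(output[0]["generated_text"])
```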

## Training Details

### Training Data

https://huggingface.co/datasets/Intel/orca_dpo_pairs

### Training Procedure

DPO (Direct Preference Optimization), run for 200 steps with unsloth.
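
The exact training script is not published here; the following is only a rough sketch of what a comparable DPO run with trl could look like. Only the dataset and the 200-step count come from this card; the hyperparameters (beta, learning rate, batch size, sequence lengths) are assumptions, and the actual run used unsloth's optimized loading rather than plain transformers.

```python
# Rough sketch of a comparable DPO run with trl (NOT the exact script used).
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import DPOTrainer

base = "Q-bert/MetaMath-Cybertron-Starling"
model = AutoModelForCausalLM.from_pretrained(base)
tokenizer = AutoTokenizer.from_pretrained(base)

# Intel/orca_dpo_pairs has "question", "chosen", and "rejected" columns;
# DPOTrainer expects "prompt", "chosen", and "rejected".
dataset = load_dataset("Intel/orca_dpo_pairs", split="train")
dataset = dataset.map(lambda row: {"prompt": row["question"]})

args = TrainingArguments(
    output_dir="go-bruins-dpo",
    per_device_train_batch_size=2,   # assumed
    gradient_accumulation_steps=4,   # assumed
    learning_rate=5e-6,              # assumed
    max_steps=200,                   # the card reports 200 DPO steps
    logging_steps=10,
)

trainer = DPOTrainer(
    model=model,
    ref_model=None,       # trl creates a frozen reference copy when None
    args=args,
    beta=0.1,             # assumed DPO temperature
    train_dataset=dataset,
    tokenizer=tokenizer,
    max_length=1024,      # assumed
    max_prompt_length=512,
)
trainer.train()
```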

## Evaluation

Coming soon! In my own quick tests, this model improves on the Q-bert base model, which is currently SOTA for 7B models as of December 8, 2023, so I expect it to achieve higher scores.

## Model Card Authors

Ryan Witzman

## Model Card Contact

rwitz_ on Discord