tayyibsupercool committed
Commit 192280f
Parent(s): f9cff86
Update README.md

README.md CHANGED
@@ -7,10 +7,6 @@ tags: [bloom-560m, lora]
 
 This model card describes a transformers model based on the Bloom 560m architecture, fine-tuned with LoRA (Low-Rank Adaptation). This model is intended for advanced users familiar with large language models and LoRA.
 
-
-
-## Model Details
-
 ### Model Description
 
 This is a Bloom 560m model fine-tuned with LoRA. Bloom 560m is a causal language model from the BigScience project, trained on a massive dataset of text and code. LoRA is a technique for adapting a pre-trained model to new data without retraining the entire model.
@@ -22,7 +18,7 @@ This is a Bloom 560m model fine-tuned with LoRA.
 - **Model type:** Causal LLM
 - **Language(s) (NLP):** English
 <!-- - **License:** [More Information Needed] -->
-- **Finetuned from model
+- **Finetuned from model:** Bloom 560m (original model by BigScience)
 
 ### Model Sources
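The card's one-line description of LoRA can be made concrete: instead of updating a full pretrained weight matrix W, LoRA freezes W and learns a low-rank update BA that is added to it at the forward pass. A minimal NumPy sketch of the idea (the dimensions and rank here are illustrative assumptions, not taken from this model's actual adapter config):

```python
import numpy as np

rng = np.random.default_rng(0)

d, k, r = 1024, 1024, 8              # layer dims and LoRA rank (illustrative)
W = rng.normal(size=(d, k))          # frozen pretrained weight, never updated
A = rng.normal(size=(r, k)) * 0.01   # trainable low-rank factor
B = np.zeros((d, r))                 # zero-initialised, so W + B @ A == W at start

x = rng.normal(size=(k,))

# Adapted forward pass: y = W x + B (A x). Only A and B receive gradients.
y = W @ x + B @ (A @ x)

full_params = W.size
lora_params = A.size + B.size
print(lora_params / full_params)     # 0.015625, i.e. ~1.6% of the full matrix
```

Because B starts at zero, the adapted model is exactly the pretrained model before training begins; the tiny parameter ratio is why LoRA avoids "retraining the entire model", as the card puts it.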