---
library_name: transformers  
tags: [bloom-560m, lora]
---

# Model Card for Bloom-560m LoRA

This model card describes a `transformers` model based on the Bloom 560m architecture, fine-tuned with LoRA (Low-Rank Adaptation). It is intended for users familiar with large language models and parameter-efficient fine-tuning.

### Model Description

This is a Bloom 560m model fine-tuned with LoRA. Bloom 560m is a multilingual causal language model released by the BigScience workshop, trained on a large corpus of text and code. LoRA adapts a pre-trained model to new data without retraining the entire model: the original weights stay frozen, and only small low-rank update matrices injected into selected layers are trained.
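The sketch below shows how a LoRA adapter of this kind is typically set up with the `peft` library. The rank, alpha, dropout, and target modules are illustrative assumptions, not the exact settings used for this fine-tune.

```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, TaskType, get_peft_model

# Load the frozen base model.
base_model = AutoModelForCausalLM.from_pretrained("bigscience/bloom-560m")

# Illustrative LoRA configuration; these hyperparameters are assumptions,
# not the exact values used to train this adapter.
lora_config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=8,                                 # rank of the low-rank update matrices
    lora_alpha=16,                       # scaling factor for the updates
    lora_dropout=0.05,
    target_modules=["query_key_value"],  # BLOOM's fused attention projection
)

# Wrap the base model; only the small LoRA matrices are trainable.
model = get_peft_model(base_model, lora_config)
model.print_trainable_parameters()
```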


- **Developed by:** Tayyib Ul Hassan
<!-- - **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed] -->
- **Model type:** Causal LLM
- **Language(s) (NLP):** English
<!-- - **License:** [More Information Needed] -->
- **Finetuned from model:** [bigscience/bloom-560m](https://huggingface.co/bigscience/bloom-560m) (original model by the BigScience workshop)
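As a usage sketch, an adapter like this one is normally loaded on top of the base model with `peft` as shown below; the adapter repository ID is a placeholder, since this card does not state where the adapter weights are hosted.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

tokenizer = AutoTokenizer.from_pretrained("bigscience/bloom-560m")
base_model = AutoModelForCausalLM.from_pretrained("bigscience/bloom-560m")

# "username/bloom-560m-lora" is a placeholder; replace it with the actual
# repository or local path of this fine-tuned adapter.
model = PeftModel.from_pretrained(base_model, "username/bloom-560m-lora")

inputs = tokenizer("Once upon a time,", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```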

### Model Sources

<!-- Provide the basic links for the model. -->

- **Paper:** [BLOOM: A 176B-Parameter Open-Access Multilingual Language Model](https://arxiv.org/abs/2211.05100)