## About
This is an 8-bit quantized version of Facebook's mBART (`mbart-large-cc25`) model.
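For intuition, 8-bit quantization stores weights as `int8` codes plus a floating-point scale. The toy absmax sketch below is illustrative only; the actual bitsandbytes `LLM.int8()` kernels are more involved (vector-wise scales and mixed-precision outlier handling):

```python
# Toy absmax int8 quantization: illustrative only, NOT the bitsandbytes kernel.

def quantize_absmax(weights):
    """Map floats to int8 codes in [-127, 127] using a single absmax scale."""
    scale = max(abs(w) for w in weights) / 127.0
    codes = [round(w / scale) for w in weights]
    return codes, scale

def dequantize(codes, scale):
    """Recover approximate float weights from int8 codes."""
    return [c * scale for c in codes]

w = [0.12, -0.5, 0.33, 1.0, -0.07]
codes, scale = quantize_absmax(w)
w_hat = dequantize(codes, scale)
# Rounding error is bounded by half the scale step.
max_err = max(abs(a - b) for a, b in zip(w, w_hat))
```

The memory saving comes from storing one byte per weight instead of two (fp16) or four (fp32), at the cost of a small, bounded rounding error.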
According to the abstract, MBART is a sequence-to-sequence denoising auto-encoder pretrained on large-scale monolingual corpora in many languages using the BART objective. mBART is one of the first methods for pretraining a complete sequence-to-sequence model by denoising full texts in multiple languages, while previous approaches have focused only on the encoder, decoder, or reconstructing parts of the text.
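The text-infilling part of that denoising objective can be illustrated with a toy noising function: contiguous spans of tokens are replaced by a single mask token, and the model is trained to reconstruct the original text. This is a simplified sketch (hypothetical helper, not the authors' implementation; the real objective also shuffles sentences and samples span lengths from a Poisson distribution):

```python
import random

def mask_spans(tokens, mask="<mask>", span_len=2, n_spans=1, seed=0):
    """Replace n_spans contiguous spans of span_len tokens each with a single
    mask token, a simplified version of BART-style text infilling."""
    rng = random.Random(seed)
    tokens = list(tokens)
    for _ in range(n_spans):
        start = rng.randrange(len(tokens) - span_len + 1)
        tokens[start:start + span_len] = [mask]
    return tokens

src = "mBART is pretrained by denoising full texts in many languages".split()
noised = mask_spans(src, n_spans=2)
# The model sees `noised` as input; its training target is the original `src`.
```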
This model was contributed by [valhalla](https://huggingface.co/valhalla). The authors' code can be found [here](https://github.com/facebookresearch/fairseq/tree/main/examples/mbart).
## Usage info
Install the required packages:
```bash
pip install -U bitsandbytes sentencepiece accelerate
```

(`accelerate` is needed for `device_map='auto'` below.)
then load the model with the 🤗 Transformers library:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM, pipeline

tokenizer = AutoTokenizer.from_pretrained("Ransaka/mbart-large-cc25-8bit")
model = AutoModelForSeq2SeqLM.from_pretrained("Ransaka/mbart-large-cc25-8bit", device_map='auto')

# bitsandbytes prints a banner like this if the import succeeds:
# ===================================BUG REPORT===================================
# Welcome to bitsandbytes. For bug reports, please run
#
# python -m bitsandbytes
#
# and submit this information together with your error trace to: https://github.com/TimDettmers/bitsandbytes/issues
# ================================================================================
# bin /opt/conda/lib/python3.7/site-packages/bitsandbytes/libbitsandbytes_cuda113_nocublaslt.so
# CUDA SETUP: CUDA runtime path found: /usr/local/cuda/lib64/libcudart.so
# CUDA SETUP: Highest compute capability among GPUs detected: 6.0
# CUDA SETUP: Detected CUDA version 113
# CUDA SETUP: Loading binary /opt/conda/lib/python3.7/site-packages/bitsandbytes/libbitsandbytes_cuda113_nocublaslt.so...

# create a summarization pipeline
text = """Right now, major tech firms are clamouring to replicate the runaway success of ChatGPT,
the generative AI chatbot developed by OpenAI using its GPT-3 large language model.
Much like potential game-changers of the past, such as cloud-based Software as a Service
(SaaS) platforms or blockchain technology (emphasis on potential), established companies
and start-ups alike are going public with LLMs and ChatGPT alternatives in fear of being left behind.
"""
pipe = pipeline('text2text-generation', model=model, tokenizer=tokenizer)
pipe(text)
# [{'generated_text': 'theore, major tech are clamouring to replicate the generative AI chatbot developed by OpenAI using its AI'}]
```