vilsonrodrigues committed
Commit 55863b9 • 1 Parent(s): 9d2bef0
synchronizing readme
README.md CHANGED
@@ -26,6 +26,8 @@ Resharded version of https://huggingface.co/tiiuae/falcon-7b for low RAM environments
 * **It features an architecture optimized for inference**, with FlashAttention ([Dao et al., 2022](https://arxiv.org/abs/2205.14135)) and multiquery ([Shazeer et al., 2019](https://arxiv.org/abs/1911.02150)).
 * **It is made available under a permissive Apache 2.0 license allowing for commercial use**, without any royalties or restrictions.
 
+⚠️ Falcon is now available as a core model in the `transformers` library! To use the in-library version, please install the latest version of `transformers` with `pip install git+https://github.com/huggingface/transformers.git`, then simply remove the `trust_remote_code=True` argument from `from_pretrained()`.
+
 ⚠️ **This is a raw, pretrained model, which should be further finetuned for most use cases.** If you are looking for a version better suited to taking generic instructions in a chat format, we recommend taking a look at [Falcon-7B-Instruct](https://huggingface.co/tiiuae/falcon-7b-instruct).
 
 🔥 **Looking for an even more powerful model?** [Falcon-40B](https://huggingface.co/tiiuae/falcon-40b) is Falcon-7B's big brother!
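For reference, a minimal sketch of the in-library loading path the added note describes: `trust_remote_code=True` is dropped, and a `transformers` version that ships Falcon natively is assumed (plus `accelerate` installed for `device_map="auto"`).

```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

model_id = "tiiuae/falcon-7b"

tokenizer = AutoTokenizer.from_pretrained(model_id)
# No trust_remote_code=True here: with in-library Falcon support, the
# architecture is resolved from the checkpoint's config automatically.
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # same dtype as the README example
    device_map="auto",           # requires accelerate
)
```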
@@ -34,16 +36,13 @@ Resharded version of https://huggingface.co/tiiuae/falcon-7b for low RAM environments
 from transformers import AutoTokenizer, AutoModelForCausalLM
 import transformers
 import torch
-
 model = "tiiuae/falcon-7b"
-
 tokenizer = AutoTokenizer.from_pretrained(model)
 pipeline = transformers.pipeline(
     "text-generation",
     model=model,
     tokenizer=tokenizer,
     torch_dtype=torch.bfloat16,
-    trust_remote_code=True,
     device_map="auto",
 )
 sequences = pipeline(
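The hunk above ends mid-call because the arguments to `pipeline(...)` are unchanged and therefore elided by the diff. For context, the call has this shape; the prompt and sampling settings below are illustrative stand-ins, not necessarily the README's exact values.

```python
sequences = pipeline(
    "Write a short story about a falcon.",  # illustrative prompt
    max_length=200,                         # illustrative generation settings
    do_sample=True,
    top_k=10,
    num_return_sequences=1,
    eos_token_id=tokenizer.eos_token_id,
)
```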
@@ -56,7 +55,6 @@ sequences = pipeline(
 )
 for seq in sequences:
     print(f"Result: {seq['generated_text']}")
-
 ```
 
 💥 **Falcon LLMs require PyTorch 2.0 for use with `transformers`!**
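Given the PyTorch 2.0 requirement stated above, a small guard one might run before loading the model. This is a sketch; the version parsing is deliberately simplistic.

```python
import torch

# Falcon with transformers expects PyTorch 2.0+; fail early otherwise.
major = int(torch.__version__.split(".")[0])
if major < 2:
    raise RuntimeError(
        f"PyTorch 2.0+ is required for Falcon, found {torch.__version__}"
    )
```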
@@ -105,16 +103,13 @@ We recommend users of Falcon-7B to consider finetuning it for the specific set o
 from transformers import AutoTokenizer, AutoModelForCausalLM
 import transformers
 import torch
-
 model = "tiiuae/falcon-7b"
-
 tokenizer = AutoTokenizer.from_pretrained(model)
 pipeline = transformers.pipeline(
     "text-generation",
     model=model,
     tokenizer=tokenizer,
     torch_dtype=torch.bfloat16,
-    trust_remote_code=True,
     device_map="auto",
 )
 sequences = pipeline(
@@ -127,7 +122,6 @@ sequences = pipeline(
 )
 for seq in sequences:
     print(f"Result: {seq['generated_text']}")
-
 ```
 
 ## Training Details