Update README.md

README.md CHANGED
````diff
@@ -4,7 +4,7 @@
 
 ## Model description
 
-We pre-trained BERT-base model on 152 million sentences from the StackOverflow's 10 year archive. More details of this model can be found in our ACL 2020 paper: [Code and Named Entity Recognition in StackOverflow](https://www.aclweb.org/anthology/2020.acl-main.443/).
+We pre-trained BERT-base model on 152 million sentences from the StackOverflow's 10 year archive. More details of this model can be found in our ACL 2020 paper: [Code and Named Entity Recognition in StackOverflow](https://www.aclweb.org/anthology/2020.acl-main.443/).
 
 
 
@@ -15,8 +15,8 @@ We pre-trained BERT-base model on 152 million sentences from the StackOverflow's
 from transformers import *
 import torch
 
-tokenizer = AutoTokenizer.from_pretrained("
-model = AutoModelForTokenClassification.from_pretrained("jeniya/BERTOverflow")
+tokenizer = AutoTokenizer.from_pretrained("lanwuwei/BERTOverflow_stackoverflow_github")
+model = AutoModelForTokenClassification.from_pretrained("lanwuwei/BERTOverflow_stackoverflow_github")
 
 ```
 
@@ -32,4 +32,4 @@ model = AutoModelForTokenClassification.from_pretrained("jeniya/BERTOverflow")
 url={https://www.aclweb.org/anthology/2020.acl-main.443/}
 year = {2020},
 }
-```
+```
````
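For anyone trying the updated checkpoint, here is a minimal sketch of loading it and tagging a sentence. The model id comes from the diff above; the sample sentence, the explicit imports (in place of the README's wildcard `from transformers import *`), and the label printout are illustrative assumptions, not part of the commit:

```python
# Sketch: token-level NER with the updated checkpoint from the diff.
# The sample sentence below is an assumption for illustration only.
from transformers import AutoTokenizer, AutoModelForTokenClassification
import torch

checkpoint = "lanwuwei/BERTOverflow_stackoverflow_github"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForTokenClassification.from_pretrained(checkpoint)
model.eval()

sentence = "How do I parse JSON with the requests library in Python?"
inputs = tokenizer(sentence, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, seq_len, num_labels)

# Pick the highest-scoring label id per token and map it to its name.
pred_ids = logits.argmax(dim=-1)[0].tolist()
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
for token, label_id in zip(tokens, pred_ids):
    print(f"{token}\t{model.config.id2label[label_id]}")
```

Using explicit `AutoTokenizer`/`AutoModelForTokenClassification` imports rather than the wildcard import keeps the snippet self-documenting; note that WordPiece may split identifiers like `requests` into sub-tokens, so per-word labels require regrouping sub-tokens.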