Codebee committed on
Commit
c1f40ab
•
1 Parent(s): 88c9682

Upload 10 files

Browse files
README.md ADDED
@@ -0,0 +1,75 @@
+ ---
+ language: zh
+ ---
+
+ # Bert-base-chinese
+
+ ## Table of Contents
+ - [Model Details](#model-details)
+ - [Uses](#uses)
+ - [Risks, Limitations and Biases](#risks-limitations-and-biases)
+ - [Training](#training)
+ - [Evaluation](#evaluation)
+ - [How to Get Started With the Model](#how-to-get-started-with-the-model)
+
+ ## Model Details
+
+ ### Model Description
+
+ This model has been pre-trained on Chinese text. Random input masking was applied independently to word pieces, as in the original BERT paper.
+
+ - **Developed by:** HuggingFace team
+ - **Model Type:** Fill-Mask
+ - **Language(s):** Chinese
+ - **License:** [More Information Needed]
+ - **Parent Model:** See the [BERT base uncased model](https://huggingface.co/bert-base-uncased) for more information about the BERT base model.
+
+ ### Model Sources
+ - **Paper:** [BERT](https://arxiv.org/abs/1810.04805)
+
+ ## Uses
+
+ #### Direct Use
+
+ This model can be used for masked language modeling.
+
+ ## Risks, Limitations and Biases
+ **CONTENT WARNING: Readers should be aware this section contains content that is disturbing, offensive, and can propagate historical and current stereotypes.**
+
+ Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)).
+
+ ## Training
+
+ #### Training Procedure
+ * **type_vocab_size:** 2
+ * **vocab_size:** 21128
+ * **num_hidden_layers:** 12
+
+ #### Training Data
+ [More Information Needed]
+
+ ## Evaluation
+
+ #### Results
+
+ [More Information Needed]
+
+ ## How to Get Started With the Model
+ ```python
+ from transformers import AutoTokenizer, AutoModelForMaskedLM
+
+ tokenizer = AutoTokenizer.from_pretrained("bert-base-chinese")
+ model = AutoModelForMaskedLM.from_pretrained("bert-base-chinese")
+ ```
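The getting-started snippet in the README loads the tokenizer and the masked-LM head; as a minimal usage sketch (the example sentence and the `fill-mask` pipeline call are illustrative, not part of the model card), a `[MASK]` token can be filled like this:

```python
from transformers import pipeline

# Build a fill-mask pipeline around bert-base-chinese
# (downloads the model weights on first use).
fill = pipeline("fill-mask", model="bert-base-chinese")

# Illustrative sentence containing the tokenizer's [MASK] placeholder.
text = "北京是中国的[MASK]都。"

for candidate in fill(text):
    # Each candidate carries the filled token and a confidence score.
    print(candidate["token_str"], candidate["score"])
```

By default the pipeline returns the top 5 candidates, each a dict with the filled token string and its softmax score.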
config.json ADDED
@@ -0,0 +1,25 @@
+ {
+   "architectures": [
+     "BertForMaskedLM"
+   ],
+   "attention_probs_dropout_prob": 0.1,
+   "directionality": "bidi",
+   "hidden_act": "gelu",
+   "hidden_dropout_prob": 0.1,
+   "hidden_size": 768,
+   "initializer_range": 0.02,
+   "intermediate_size": 3072,
+   "layer_norm_eps": 1e-12,
+   "max_position_embeddings": 512,
+   "model_type": "bert",
+   "num_attention_heads": 12,
+   "num_hidden_layers": 12,
+   "pad_token_id": 0,
+   "pooler_fc_size": 768,
+   "pooler_num_attention_heads": 12,
+   "pooler_num_fc_layers": 3,
+   "pooler_size_per_head": 128,
+   "pooler_type": "first_token_transform",
+   "type_vocab_size": 2,
+   "vocab_size": 21128
+ }
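The hyperparameters in config.json determine the model's size. As a rough sketch (pure arithmetic on the config values shown above, ignoring bias and LayerNorm terms), the bulk of the parameter count can be estimated as:

```python
# Values copied from the config.json diff above.
hidden = 768
layers = 12
vocab = 21128
max_pos = 512
type_vocab = 2
intermediate = 3072

# Token, position, and token-type embedding tables.
embeddings = (vocab + max_pos + type_vocab) * hidden

# Per encoder layer: Q/K/V/O projections plus the two FFN matrices.
per_layer = 4 * hidden * hidden + 2 * hidden * intermediate

total = embeddings + layers * per_layer
print(f"~{total / 1e6:.0f}M parameters")  # → ~102M parameters
```

This matches the order of magnitude of the checkpoint sizes below (~410 MB of float32 weights).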
flax_model.msgpack ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:76df8425215fb9ede22e0393e356f82a99d84e79f078cd141afbbf9277460c8e
+ size 409168515
gitattributes ADDED
@@ -0,0 +1,10 @@
+ *.bin.* filter=lfs diff=lfs merge=lfs -text
+ *.lfs.* filter=lfs diff=lfs merge=lfs -text
+ *.bin filter=lfs diff=lfs merge=lfs -text
+ *.h5 filter=lfs diff=lfs merge=lfs -text
+ *.tflite filter=lfs diff=lfs merge=lfs -text
+ *.tar.gz filter=lfs diff=lfs merge=lfs -text
+ *.ot filter=lfs diff=lfs merge=lfs -text
+ *.onnx filter=lfs diff=lfs merge=lfs -text
+ *.msgpack filter=lfs diff=lfs merge=lfs -text
+ model.safetensors filter=lfs diff=lfs merge=lfs -text
model.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:3404a1ffd8da507042e8161013ba2a4fc49858b4e3f8fbf5ce5724f94883aec3
+ size 411553788
pytorch_model.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:8a693db616eaf647ed2bfe531e1fa446637358fc108a8bf04e8d4db17e837ee9
+ size 411577189
tf_model.h5 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:612acd33db45677c3d6ba70615336619dc65cddf1ecf9d39a22dd1934af4aff2
+ size 478309336
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
tokenizer_config.json ADDED
@@ -0,0 +1 @@
+ {"do_lower_case": false, "model_max_length": 512}
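The two fields in tokenizer_config.json control casing and truncation. A minimal sketch of how downstream code might read them (plain JSON parsing of the contents shown above):

```python
import json

# Parse the tokenizer_config.json contents from the diff above.
config = json.loads('{"do_lower_case": false, "model_max_length": 512}')

# do_lower_case is false: the tokenizer preserves case, which matters for
# any Latin-script tokens mixed into Chinese text.
assert config["do_lower_case"] is False

# Inputs longer than model_max_length tokens are truncated by the tokenizer,
# matching the model's max_position_embeddings of 512.
assert config["model_max_length"] == 512
```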
vocab.txt ADDED
The diff for this file is too large to render. See raw diff