KoichiYasuoka committed on
Commit 320be9c
1 Parent(s): 263af86

model improved

Files changed (2):
  1. README.md +1 -1
  2. tokenizer_config.json +1 -1
README.md CHANGED
@@ -15,7 +15,7 @@ widget:
 
 ## Model Description
 
- This is a RoBERTa model pre-trained on 青空文庫 texts with [Japanese-LUW-Tokenizer](https://github.com/KoichiYasuoka/Japanese-LUW-Tokenizer). You can fine-tune `roberta-large-japanese-aozora` for downstream tasks, such as POS-tagging, dependency-parsing, and so on.
+ This is a RoBERTa model pre-trained on 青空文庫 texts with [Japanese-LUW-Tokenizer](https://github.com/KoichiYasuoka/Japanese-LUW-Tokenizer). You can fine-tune `roberta-large-japanese-aozora` for downstream tasks, such as [POS-tagging](https://huggingface.co/KoichiYasuoka/roberta-large-japanese-luw-upos), dependency-parsing, and so on.
 
 ## How to Use
 
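For context on the changed README line: the model card says the checkpoint can be fine-tuned for downstream tasks and is itself a masked-LM pre-train. A minimal sketch of loading it for fill-mask inference with the `transformers` library follows; the model ID matches this repository, while the example sentence and the decoding logic are illustrative assumptions, not the card's official "How to Use" snippet.

```python
# Minimal sketch (not the model card's official snippet): load the
# checkpoint from this repository and run fill-mask inference.
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

model_id = "KoichiYasuoka/roberta-large-japanese-aozora"  # this repository
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForMaskedLM.from_pretrained(model_id)

text = "日本に着いたら[MASK]を訪問してください。"  # example sentence (assumption)
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Find the [MASK] position and print the highest-scoring prediction.
mask_pos = (inputs["input_ids"][0] == tokenizer.mask_token_id).nonzero(as_tuple=True)[0]
print(tokenizer.decode(logits[0, mask_pos].argmax(dim=-1)))
```

`AutoTokenizer` resolves to the `RemBertTokenizerFast` declared in tokenizer_config.json, so no tokenizer class needs to be named explicitly.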
tokenizer_config.json CHANGED
@@ -1 +1 @@
- {"do_lower_case": false, "remove_space": true, "keep_accents": true, "bos_token": "[CLS]", "eos_token": "[SEP]", "unk_token": "[UNK]", "sep_token": "[SEP]", "pad_token": "[PAD]", "cls_token": "[CLS]", "mask_token": {"content": "[MASK]", "single_word": false, "lstrip": true, "rstrip": false, "normalized": true, "__type": "AddedToken"}, "tokenizer_class": "RemBertTokenizerFast"}
+ {"do_lower_case": false, "remove_space": true, "keep_accents": true, "bos_token": "[CLS]", "eos_token": "[SEP]", "unk_token": "[UNK]", "sep_token": "[SEP]", "pad_token": "[PAD]", "cls_token": "[CLS]", "mask_token": {"content": "[MASK]", "single_word": false, "lstrip": true, "rstrip": false, "normalized": true, "__type": "AddedToken"}, "model_max_length": 512, "tokenizer_class": "RemBertTokenizerFast"}