---
language:
- "ja"
tags:
- "japanese"
- "pos"
- "dependency-parsing"
base_model: rinna/japanese-gpt-1b
datasets:
- "universal_dependencies"
license: "mit"
pipeline_tag: "token-classification"
widget:
- text: "全学年にわたって小学校の国語の教科書に挿し絵が用いられている"
---

# rinna-gpt2-1b-japanese-ud-causal
## Model Description

This is a GPT-2 model for POS-tagging and dependency-parsing, derived from [japanese-gpt-1b](https://huggingface.co/rinna/japanese-gpt-1b) and fine-tuned on [UD_Japanese-GSDLUW](https://github.com/UniversalDependencies/UD_Japanese-GSDLUW).
## How to Use

```py
from transformers import pipeline

# "universal-dependencies" is a custom pipeline task shipped with the model
# repository, so trust_remote_code=True is required to load it
nlp = pipeline("universal-dependencies", "KoichiYasuoka/rinna-gpt2-1b-japanese-ud-causal", trust_remote_code=True)
print(nlp("全学年にわたって小学校の国語の教科書に挿し絵が用いられている"))
```
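Assuming the pipeline returns its analysis as a CoNLL-U formatted string (the usual convention for Universal Dependencies tooling; this is an assumption, not something stated elsewhere in this card), the result can be post-processed with the third-party `conllu` package, as in this minimal sketch:

```py
# Minimal sketch: read the pipeline output with the `conllu` package,
# ASSUMING the output is a CoNLL-U formatted string.
import conllu
from transformers import pipeline

nlp = pipeline("universal-dependencies", "KoichiYasuoka/rinna-gpt2-1b-japanese-ud-causal", trust_remote_code=True)
for sentence in conllu.parse(nlp("全学年にわたって小学校の国語の教科書に挿し絵が用いられている")):
    for token in sentence:
        # surface form, UPOS tag, head token id, and dependency relation
        print(token["form"], token["upos"], token["head"], token["deprel"])
```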