kiddothe2b committed
Commit 1a9f89f
Parent: c2ba13f

Update README.md

Files changed (1)
README.md +6 -6
README.md CHANGED
@@ -8,11 +8,11 @@ tags:
  datasets:
  - wikipedia
  model-index:
- - name: kiddothe2b/hat-mini-1024-I2
+ - name: kiddothe2b/hierarchical-transformer-I3-mini-1024
  results: []
  ---

- # Hierarchical Attention Transformer (HAT) / hat-mini-1024-I2
+ # Hierarchical Attention Transformer (HAT) / hierarchical-transformer-I3-mini-1024

  ## Model description

@@ -25,7 +25,7 @@ HAT use a hierarchical attention, which is a combination of segment-wise and cro
  ## Intended uses & limitations

  You can use the raw model for masked language modeling, but it's mostly intended to be fine-tuned on a downstream task.
- See the [model hub](https://huggingface.co/models?other=hierarchical-transformer) to look for fine-tuned versions on a task that interests you.
+ See the [model hub](https://huggingface.co/models?filter=hierarchical-transformer) to look for fine-tuned versions on a task that interests you.

  Note that this model is primarily aimed at being fine-tuned on tasks that use the whole document to make decisions, such as document classification, sequential sentence classification or question answering.

@@ -35,7 +35,7 @@ You can use this model directly with a pipeline for masked language modeling:

  ```python
  from transformers import pipeline
- mlm_model = pipeline('fill-mask', model='kiddothe2b/hat-mini-1024-I1', trust_remote_code=True)
+ mlm_model = pipeline('fill-mask', model='kiddothe2b/hierarchical-transformer-I3-mini-1024', trust_remote_code=True)
  mlm_model("Hello I'm a <mask> model.")
  ```

@@ -43,8 +43,8 @@ You can also fine-tun it for SequenceClassification, SequentialSentenceClassific

  ```python
  from transformers import AutoTokenizer, AutoModelforSequenceClassification
- tokenizer = AutoTokenizer.from_pretrained("kiddothe2b/hat-mini-1024-I1", trust_remote_code=True)
- doc_classifier = AutoModelforSequenceClassification(model='kiddothe2b/hat-base-4096', trust_remote_code=True)
+ tokenizer = AutoTokenizer.from_pretrained("kiddothe2b/hierarchical-transformer-I3-mini-1024", trust_remote_code=True)
+ doc_classifier = AutoModelforSequenceClassification(model='kiddothe2b/hierarchical-transformer-I3-mini-1024', trust_remote_code=True)
  ```

  ## Limitations and bias
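
Note that the fine-tuning snippet in the last hunk is left buggy on both sides of the diff: the import `AutoModelforSequenceClassification` lacks the capital "F", and the class is called directly with a `model=` keyword rather than loaded via `.from_pretrained`. A minimal corrected sketch of the intended usage, assuming the standard transformers Auto-class API and the checkpoint name introduced by this commit:

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Checkpoint name introduced by this commit; trust_remote_code=True is needed
# because HAT ships its custom modeling code on the Hub.
checkpoint = "kiddothe2b/hierarchical-transformer-I3-mini-1024"

tokenizer = AutoTokenizer.from_pretrained(checkpoint, trust_remote_code=True)
doc_classifier = AutoModelForSequenceClassification.from_pretrained(
    checkpoint, trust_remote_code=True
)
```

From here the model fine-tunes like any other sequence-classification checkpoint, e.g. with `Trainer` or a plain PyTorch training loop.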