Chenxi Whitehouse committed
Commit 857a089
1 Parent(s): eca41fc

update readme

Files changed (1)
  1. README.md +3 -2
README.md CHANGED
@@ -41,6 +41,7 @@ The training and dev dataset can be found under [data](https://huggingface.co/ch
  }
  ]
  },
+ ]
 }
 ```

@@ -98,13 +99,13 @@ python -m src.reranking.question_generation_top_sentences
 ```

 ### 4. Rerank the QA pairs
- Using [a pre-trained BERT model](https://huggingface.co/chenxwh/AVeriTeC/blob/main/pretrained_models/bert_dual_encoder.ckpt), we rerank the QA pairs and keep the top 3 QA pairs as evidence. We provide the output file for this step on the dev set [here]().
+ Using a pre-trained BERT model, [bert_dual_encoder.ckpt](https://huggingface.co/chenxwh/AVeriTeC/blob/main/pretrained_models/bert_dual_encoder.ckpt), we rerank the QA pairs and keep the top 3 QA pairs as evidence. We provide the output file for this step on the dev set [here]().
 ```bash
 ```


 ### 5. Veracity prediction
- Finally, given a claim and its 3 QA pairs as evidence, we use [another pre-trained BERT model](https://huggingface.co/chenxwh/AVeriTeC/blob/main/pretrained_models/bert_veracity.ckpt) to predict the veracity label. The pre-trained model is provided. We provide the prediction file for this step on the dev set [here]().
+ Finally, given a claim and its 3 QA pairs as evidence, we use another pre-trained BERT model, [bert_veracity.ckpt](https://huggingface.co/chenxwh/AVeriTeC/blob/main/pretrained_models/bert_veracity.ckpt), to predict the veracity label. The pre-trained model is provided. We provide the prediction file for this step on the dev set [here]().
 ```bash
 ```
 The results will be presented as follows:
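
As a companion to the updated step 4 above, here is a minimal sketch of the dual-encoder reranking idea: encode the claim and each candidate QA pair separately, score them by similarity, and keep the top 3 as evidence. The `bert-base-uncased` stand-in, the helper names, and the cosine-similarity scoring are illustrative assumptions; loading the released `bert_dual_encoder.ckpt` is handled by the repository's own scripts.

```python
# Minimal sketch of dual-encoder QA reranking (illustrative only).
# Assumes `transformers` and `torch`; uses bert-base-uncased as a stand-in
# for the fine-tuned bert_dual_encoder.ckpt, which requires the repo's own loading code.
import torch
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
encoder = BertModel.from_pretrained("bert-base-uncased")
encoder.eval()

def embed(text: str) -> torch.Tensor:
    """Return the [CLS] embedding of a piece of text."""
    inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=256)
    with torch.no_grad():
        return encoder(**inputs).last_hidden_state[:, 0]

def rerank_qa_pairs(claim: str, qa_pairs: list[tuple[str, str]], top_k: int = 3):
    """Score each (question, answer) pair against the claim and keep the top_k."""
    claim_emb = embed(claim)
    scored = []
    for question, answer in qa_pairs:
        qa_emb = embed(f"{question} {answer}")
        score = torch.nn.functional.cosine_similarity(claim_emb, qa_emb).item()
        scored.append((score, question, answer))
    scored.sort(reverse=True)
    return scored[:top_k]
```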
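Likewise for step 5, a hedged sketch of feeding a claim plus its top-3 QA evidence to a BERT sequence classifier. The label list, the stand-in weights, and the helper name are assumptions for illustration, not the repository's actual veracity pipeline around `bert_veracity.ckpt`.

```python
# Illustrative veracity classification over a claim and its 3 QA evidence pairs.
# The 4-way label set and the bert-base-uncased stand-in are assumptions;
# the released bert_veracity.ckpt is loaded by the repository's own scripts.
import torch
from transformers import BertForSequenceClassification, BertTokenizer

LABELS = ["Supported", "Refuted", "Not Enough Evidence", "Conflicting Evidence/Cherrypicking"]

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
classifier = BertForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=len(LABELS))
classifier.eval()

def predict_veracity(claim: str, qa_pairs: list[tuple[str, str]]) -> str:
    """Concatenate the claim with its QA evidence and predict a veracity label."""
    evidence = " ".join(f"{q} {a}" for q, a in qa_pairs)
    inputs = tokenizer(claim, evidence, return_tensors="pt", truncation=True, max_length=512)
    with torch.no_grad():
        logits = classifier(**inputs).logits
    return LABELS[logits.argmax(dim=-1).item()]
```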