Description

This is the single-dataset adapter for the HotpotQA partition of the MRQA 2019 Shared Task Dataset. The adapter was created by Friedman et al. (2021) and should be used with the roberta-base encoder.

The UKP-SQuARE team created this repository to simplify deploying the model on the UKP-SQuARE platform. The original authors' GitHub repository is available at https://github.com/princeton-nlp/MADE.

Usage

This model contains the same weights as https://huggingface.co/princeton-nlp/MADE/resolve/main/single_dataset_adapters/HotpotQA/model.pt. The only difference is that our repository follows the standard AdapterHub format. Therefore, you can load this model as follows:

from transformers import RobertaForQuestionAnswering, RobertaTokenizerFast, pipeline

# Load the roberta-base encoder and attach the HotpotQA adapter.
model = RobertaForQuestionAnswering.from_pretrained("roberta-base")
model.load_adapter("UKP-SQuARE/HotpotQA_Adapter_RoBERTa", source="hf")
model.set_active_adapters("HotpotQA")

tokenizer = RobertaTokenizerFast.from_pretrained("roberta-base")

# Run extractive question answering with the adapted model.
pipe = pipeline("question-answering", model=model, tokenizer=tokenizer)
pipe({"question": "What is the capital of Germany?", "context": "The capital of Germany is Berlin."})

Note that this code requires the adapter-transformers library (https://adapterhub.ml) rather than the vanilla transformers library.
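If it is not installed yet, adapter-transformers can be obtained from PyPI. Note that it ships as a drop-in replacement for the transformers package, so installing both in the same environment may cause conflicts:

pip install adapter-transformers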

Evaluation

Friedman et al. report an F1 score of 78.5 on HotpotQA.
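For context, MRQA evaluation uses the SQuAD-style token-level F1 between the predicted and gold answer strings. The sketch below is an illustrative reimplementation, not the official evaluation script; the helper name token_f1 is ours, and the official script additionally strips punctuation and articles and takes the maximum over all gold answers.

from collections import Counter

def token_f1(prediction: str, gold: str) -> float:
    # Simplified SQuAD-style token F1 (lowercased whitespace tokens only).
    pred_tokens = prediction.lower().split()
    gold_tokens = gold.lower().split()
    # Count tokens shared between prediction and gold (with multiplicity).
    common = Counter(pred_tokens) & Counter(gold_tokens)
    num_same = sum(common.values())
    if num_same == 0:
        return 0.0
    precision = num_same / len(pred_tokens)
    recall = num_same / len(gold_tokens)
    return 2 * precision * recall / (precision + recall)

print(token_f1("Berlin", "the city of Berlin"))  # 0.4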

Please refer to the original publication for more information.

Citation

Single-dataset Experts for Multi-dataset Question Answering (Friedman et al., EMNLP 2021)
