Tasks: Text Classification
Modalities: Text
Formats: text
Languages: English
Size: 10K - 100K
File size: 10,098 Bytes
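Each line of the file is one newline-delimited JSON record describing an OpenReview submission and its discussion thread. A minimal sketch of reading a record, assuming the file is saved locally under the hypothetical name `data.jsonl`:

```python
import json

# Minimal sketch: read the first record from the newline-delimited JSON file.
# "data.jsonl" is a hypothetical filename; substitute the actual file path.
with open("data.jsonl", encoding="utf-8") as f:
    record = json.loads(f.readline())

print(record["forum"])                        # e.g., "r1loaec6pm"
print(record["submission_content"]["title"])  # submission title
print(sorted(record))                         # top-level field names
```

A complete record from the file follows.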
{"forum": "r1loaec6pm", "submission_url": "https://openreview.net/forum?id=r1loaec6pm", "submission_content": {"title": "Applying Citizen Science to Gene, Drug, Disease Relationship Extraction from Biomedical Abstracts", "authors": ["Ginger Tsueng", "Max Nanis", "Jennifer T. Fouquier", "Michael Mayers", "Benjamin M. Good", "Andrew I Su"], "authorids": ["[email protected]", "[email protected]", "[email protected]", "[email protected]", "[email protected]", "[email protected]"], "keywords": ["citizen science", "relationship extraction", "biomedical literature", "abstracts"], "TL;DR": "", "abstract": "Biomedical literature is growing at a rate that outpaces our ability to harness the knowledge contained therein. In order to mine valuable inferences from the large volume of literature, many researchers have turned to information extraction algorithms to harvest information in biomedical texts. Information extraction is usually accomplished via a combination of manual expert curation and computational methods. Advances in computational methods usually depends on the generation of gold standards by a limited number of expert curators. This process can be time consuming and represents an area of biomedical research that is ripe for exploration with citizen science. Citizen scientists have been previously found to be willing and capable of performing named entity recognition of disease mentions in biomedical abstracts, but it was uncertain whether or not the same could be said of relationship extraction. Relationship extraction requires training on identifying named entities as well as a deeper understanding of how different entity types can relate to one another. Here, we used the web-based application Mark2Cure (https://mark2cure.org) to demonstrate that citizen scientists can perform relationship extraction and confirm the importance of accurate named entity recognition on this task. We also discuss opportunities for future improvement of this system, as well as the potential synergies between citizen science, manual biocuration, and natural language processing. ", "archival status": "", "subject areas": [], "pdf": "", "paperhash": "tsueng|applying_citizen_science_to_gene_drug_disease_relationship_extraction_from_biomedical_abstracts", "_bibtex": "@inproceedings{\ntsueng2019applying,\ntitle={Applying Citizen Science to Gene, Drug, Disease Relationship Extraction from Biomedical Abstracts},\nauthor={Ginger Tsueng and Max Nanis and Jennifer T. Fouquier and Michael Mayers and Benjamin M. 
Good and Andrew I Su},\nbooktitle={Automated Knowledge Base Construction (AKBC)},\nyear={2019},\nurl={https://openreview.net/forum?id=r1loaec6pm}\n}"}, "submission_cdate": 1542459587007, "submission_tcdate": 1542459587007, "submission_tmdate": 1580993622387, "submission_ddate": null, "review_id": [], "review_url": [], "review_cdate": [], "review_tcdate": [], "review_tmdate": [], "review_readers": [], "review_writers": [], "review_reply_count": [], "review_replyto": [], "review_content": [], "comment_id": ["H1ge3Kpd7V", "r1xW156O7N", "rylEPyCOX4"], "comment_cdate": [1548437927916, 1548437976755, 1548439388242], "comment_tcdate": [1548437927916, 1548437976755, 1548439388242], "comment_tmdate": [1548985466613, 1548985456728, 1548985444707], "comment_readers": [["everyone"], ["everyone"], ["everyone"]], "comment_writers": [["AKBC.ws/2019/Conference/Paper5/Authors", "AKBC.ws/2019/Conference"], ["AKBC.ws/2019/Conference/Paper5/Authors", "AKBC.ws/2019/Conference"], ["AKBC.ws/2019/Conference/Paper5/Authors", "AKBC.ws/2019/Conference"]], "comment_reply_content": [{"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}], "comment_content": [{"title": "RE- Interesting insights into crowd-provided relation annotations", "comment": "\n> Reviewer 2: Overall this paper generates a lot of interesting insights, but does not close the loop by feeding these\n> back into the Mark2Cure platform and improving annotation quality. So while there is potential for impact, a lot of\n> this potential has not yet been actualized. \n\n--We wholeheartedly agree with this statement as much of this work establishes the potential benefit of Citizen Science to biocuration. Although preliminary, we believe there is sufficient value in this work to share these current findings, and have added a sentence to the text to clarify this point.\n\n> Reviewer 2: Based on Figure 1C, the system maxes out at around 73%, even with an ensemble of workers. It is not \n> clear to me how satisfactory this level of accuracy is. For example, would this be good enough to train a relation\n> extraction model? \n\n--We believe that our current 73% maximum accuracy could be improved with further refinement of our platform based on the findings reported in this paper. In general, our intuition is that Citizen Science alone would be sufficiently accurate for assistive or statistical analyses, but not of comparable accuracy to expert curation. We are exploring more quantitative methods of assessing the accuracy needed for various biomedical applications. In addition, we have added a short explanation on the differences in our relationship extraction effort from other efforts which make drawing direct comparison difficult.\n\n> Reviewer 2: Some design choices seem odd and perhaps merit some explanation. For example, why not use\n> Mark2Cure on relations between entities of the same type? It was noted that SemMedDB has such relations, and in\n> general there seem to be many valid cases of these (drug-drug interactions, gene regulation, etc.). \n\n--In our limited experience and from what we\u2019ve observed of SemMedDB, the vast majority of relationships between entities of the same type tend to be hypernymic propositions (\u201cis a\u201d). SemMedDB is actually very good with hypernymic propositions (https://www.sciencedirect.com/science/article/pii/S1532046403001175?via%3Dihub), so we limited the annotations to be done by citizen scientists to those of different entity types. 
Clarification has been added to the methods section.\n\n> Reviewer 2: I don't understand the grey dots in Figure 4. What does it mean that no identifier was available? \n\n--Not every annotation from Pubtator is linked to an identifier. Annotations lacking identifiers may or may not be synonymous with other annotations lacking identifiers within an abstract so it becomes difficult to calculate the minimum distance within the text between two concepts if there are multiple annotations lacking identifiers. We thank the reviewer for pointing out this oversight and have made the necessary clarifications in the text.\n\n> Reviewer 2: Some minor formatting things: M2C was used as an acronym but not defined. Use \"(i.e., xyz)\" instead of\n> \"(ie- xyz)\", and same for \"e.g.\".\n\n--Acronym added to text and formatting fixed.\n"}, {"title": "RE- promising initial results, but missing comparison with different relation extraction methods", "comment": "\n> Reviewer 1: To make it more readable, I suggest adding an explanation of SemMedDB in the introduction, as well as\n> explaining the meaning of UMLS and the UMLS CUI in the section they are mentioned in.\n\n--We have incorporated these suggestions in the revised text.\n\n> Reviewer 1: What is missing is a more thorough positioning within the related work on relation extraction. In\n> particular, it would be useful to evaluate this application in comparison with with well-known general-purpose\n> automated relation extraction models [1], and models that are tailored for medical relation extraction [2]. Another\n> interesting comparison to make is with the different active [3] and semi-supervised learning methods [4], that also\n> have experimented with different ways of aggregating crowd data.\n\n--We thank the reviewer for these excellent references and have added them to the text as exciting avenues for exploration. Investigations with active learning are ongoing, but are not yet complete. We look forward to performing the other comparisons to existing methods in future work, and look forward to meeting potential collaborators at AKBC who might be interested in exploring this direction of work with us.\n"}, {"title": "RE- Interesting but missing solid takeaway messages", "comment": "\n> Reviewer 3: While the authors report several findings on the specific selected dataset, as a reader I struggle to easily \n> identify: \n> - the novelty: most of the findings are consistent with previous literature, both on the viability of performing medical\n> relation extraction via non-expert crowd (Aroyo and Welty, 2013) and and the fact that NER performance affects RE\n> performance\n\n---We thank the reviewer for taking the time to read our paper and offer constructive criticism. We agree that non-expert crowdsourcing (especially via paid micro-task platforms) has been demonstrated, and feel that applying citizen science is a logical next step. We hope to further address the citizen science aspects of this work in the future.\n\n> - clear takeaway guidelines and messages: the discussion is somewhat limited to specific issue of the one analyzed\n> datasets, but fail to provide reusable insights\n\n---We thank the reviewer for confirming the concerns of the other two reviewers on potential impact being unrealized, and the suggestions for improving the paper with additional comparative work. 
We hope that we have sufficiently addressed those concerns with the current revision and in the other reviewer responses."}], "comment_replyto": ["SkxedGg0eN", "rJloLmskGE", "ryewOgvTMV"], "comment_url": ["https://openreview.net/forum?id=r1loaec6pm¬eId=H1ge3Kpd7V", "https://openreview.net/forum?id=r1loaec6pm¬eId=r1xW156O7N", "https://openreview.net/forum?id=r1loaec6pm¬eId=rylEPyCOX4"], "meta_review_cdate": null, "meta_review_tcdate": null, "meta_review_tmdate": null, "meta_review_ddate ": null, "meta_review_title": null, "meta_review_metareview": null, "meta_review_readers": null, "meta_review_writers": null, "meta_review_reply_count": null, "meta_review_url": null, "decision": null} |
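Within a record, the discussion thread is stored as parallel arrays: `comment_id`, `comment_replyto`, `comment_content`, `comment_url`, and the timestamp fields line up index by index. A minimal sketch of walking the thread under the same assumptions as above:

```python
import json

# Minimal sketch: pair up the parallel comment arrays in one record.
# "data.jsonl" is again a hypothetical filename.
with open("data.jsonl", encoding="utf-8") as f:
    record = json.loads(f.readline())

for cid, parent, body in zip(
    record["comment_id"],
    record["comment_replyto"],
    record["comment_content"],
):
    # Each body is a dict with "title" and "comment" keys;
    # "comment_replyto" holds the id of the note being answered.
    print(f"{cid} -> reply to {parent}: {body['title']}")
```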