{"forum": "BJerQWcp6Q", "submission_url": "https://openreview.net/forum?id=BJerQWcp6Q", "submission_content": {"title": "NormCo: Deep Disease Normalization for Biomedical Knowledge Base Construction", "authors": ["Dustin Wright", "Yannis Katsis", "Raghav Mehta", "Chun-Nan Hsu"], "authorids": ["dbw003@eng.ucsd.edu", "yannis.katsis@ibm.com", "r3mehta@eng.ucsd.edu", "chunnan@ucsd.edu"], "keywords": ["Entity Normalization", "Biomedical Knowledge Base Construction"], "TL;DR": "We present NormCo, a deep coherence model which considers the semantics of an entity mention, as well as the topical coherence of the mentions within a single document to perform disease entity normalization.", "abstract": "Biomedical knowledge bases are crucial in modern data-driven biomedical sciences, but auto-mated biomedical knowledge base construction remains challenging. In this paper, we consider the problem of disease entity normalization, an essential task in constructing a biomedical knowledge base. We present NormCo, a deep coherence model which considers the semantics of an entity mention, as well as the topical coherence of the mentions within a single document. NormCo mod-els entity mentions using a simple semantic model which composes phrase representations from word embeddings, and treats coherence as a disease concept co-mention sequence using an RNN rather than modeling the joint probability of all concepts in a document, which requires NP-hard inference. To overcome the issue of data sparsity, we used distantly supervised data and synthetic data generated from priors derived from the BioASQ dataset. Our experimental results show thatNormCo outperforms state-of-the-art baseline methods on two disease normalization corpora in terms of (1) prediction quality and (2) efficiency, and is at least as performant in terms of accuracy and F1 score on tagged documents.", "pdf": "/pdf/e95a420735f9fe3f50dabead18c7a7c03e347f50.pdf", "archival status": "Archival", "subject areas": ["Machine Learning", "Natural Language Processing"], "paperhash": "wright|normco_deep_disease_normalization_for_biomedical_knowledge_base_construction", "html": "https://github.com/IBM/aihn-ucsd/tree/master/NormCo-deep-disease-normalization", "_bibtex": "@inproceedings{\nwright2019normco,\ntitle={NormCo: Deep Disease Normalization for Biomedical Knowledge Base Construction},\nauthor={Dustin Wright and Yannis Katsis and Raghav Mehta and Chun-Nan Hsu},\nbooktitle={Automated Knowledge Base Construction (AKBC)},\nyear={2019},\nurl={https://openreview.net/forum?id=BJerQWcp6Q}\n}"}, "submission_cdate": 1542459676751, "submission_tcdate": 1542459676751, "submission_tmdate": 1580939650174, "submission_ddate": null, "review_id": ["Skg8RfIzfN", "BJlXzXCyfN", "H1g1fkDsWE"], "review_url": ["https://openreview.net/forum?id=BJerQWcp6Q¬eId=Skg8RfIzfN", "https://openreview.net/forum?id=BJerQWcp6Q¬eId=BJlXzXCyfN", "https://openreview.net/forum?id=BJerQWcp6Q¬eId=H1g1fkDsWE"], "review_cdate": [1546965709675, 1546801930823, 1546510087042], "review_tcdate": [1546965709675, 1546801930823, 1546510087042], "review_tmdate": [1550269654474, 1550269654257, 1550269654039], "review_readers": [["everyone"], ["everyone"], ["everyone"]], "review_writers": [["AKBC.ws/2019/Conference"], ["AKBC.ws/2019/Conference"], ["AKBC.ws/2019/Conference"]], "review_reply_count": [{"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}], "review_replyto": ["BJerQWcp6Q", "BJerQWcp6Q", "BJerQWcp6Q"], "review_content": [{"title": "Very interesting paper that describes a positive 
contribution to the state of the art in BioNLP", "review": "This paper proposes a deep-learning-based method to solve the known BioNLP task of disease normalization on the NCBI disease benchmark (where disease named entities are normalized/disambiguated against the MeSH and OMIM disease controlled vocabularies and taxonomies). The best known methods (DNorm, TaggerOne) are based on a pipeline combination of sequence models (conditional random fields) for disease recognition, and (re)ranking models for linking/normalization.\n\nThe current paper proposes instead an end-to-end entity recognition and normalization system relying on word embeddings, a Siamese architecture, and recurrent neural networks to improve significantly (4%, 84 vs 80% F1-score, T. 3). A key feature is the use of a GRU autoencoder to encode or represent the \"context\" (related entities of a given disease within the span of a sentence), as a way of approximating or simulating collective normalization (in graph-based entity linking methods), which they term \"coherence model\". This model is combined (weighted linear combination) with a model of the entity itself.\nFinally, the complete model is trained to maximize similarity between MeSH/OMIM and this combined representation.\nThe model is enriched further with additional techniques (e.g., distant supervision). \n\nThe paper is well written, generally speaking. The evaluation is exhaustive. In addition to the NCBI corpus, the BioCreative5 CDR (chemical-disease relationship) corpus is used. Ablation tests are carried out to test for the contribution of each module to global performance. Examples are discussed.\n\nThere are a few minor issues that it would help to clarify:\n\n(1) Why GRU cells instead of LSTM cells?\n(2) Could you please explain/recall why (as implied) traditional models are NP-hard? I didn't get it. Do you refer to the theoretical complexity of Markov random fields/probabilistic graphical models? Maybe you should speak of combinatorial explosion instead and give some combinatorial figure (and link this to traditional methods). My guess is that this is important, as the gain in runtime performance (e.g., training time - F. 4) might be linked to this.\n(3) A link should be added to the GitHub repository archiving the model/code, to ensure reproducibility of results.\n(4) Could you please check for *statistical significance* for T. 3, 5, 6 and 7? At least for the full system (before ablations). You could use cross-validation. ", "rating": "9: Top 15% of accepted papers, strong accept", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}, {"title": "Simple, fast method with decent results on disease normalization (linking)", "review": "Summary:\nThe authors address the problem of disease normalization (i.e., linking). They propose a neural model with submodules for mention similarity and for entity coherence. They also propose methods for generating additional training data. Overall the paper is nicely written, with good results from simple, efficient methods.\n\nPros:\n- Paper is nicely written with good coverage of related work\n- LCA analysis is a useful metric for severity of errors\n- strong results on the NCBI corpus\n- methods are significantly faster and require far fewer parameters than TaggerOne while yielding comparable results\n\nCons:\n- Results on BC5 are mixed. 
Why?\n- Data augmentation not applied to baselines\n- Methods are not very novel\n\nQuestions:\n- Were the AwA results applied only at test time or were the models (including baselines) re-trained using unresolved abbreviation training data?", "rating": "7: Good paper, accept", "confidence": "5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature"}, {"title": "very competent work on an important problem", "review": "The paper presents a method for named entity disambiguation\ntailored to the important case of medical entities,\nspecifically diseases with MeSH and OMIM\nas the canonicalized entity repository.\nThe method, coined NormCo, is based on a cleverly designed\nneural network with distant supervision from MeSH tags of\nPubMed abstracts and an additional heuristic for estimating\nco-occurrence frequencies for long-tail entities.\n\nThis is very competent work on an important and challenging\nproblem. The method is presented clearly, so it would be easy\nto reproduce its findings and adopt the method for further\nresearch in this area.\nOverall a very good paper.\n\nSome minor comments:\n\n1) The paper's statement that coherence models have\nbeen introduced only recently is exaggerated. \nFor general-purpose named entity disambiguation, coherence\nhas been prominently used already by the works of\nRatinov et al. (ACL 2011), Hoffart et al. (EMNLP 2011)\nand Ferragina et al. (CIKM 2010); and the classical\nworks of Cucerzan (EMNLP 2007) and Milne/Witten (CIKM 2008)\nimplicitly included considerations on coherence as well.\nThis does not reduce the merits of the current paper,\nbut should be properly stated when discussing prior works.\n\n2) The experiments report only micro-F1. Why is macro-F1 \n(averaged over all documents) not considered at all?\nWouldn't this better reflect particularly difficult cases\nwith texts that mention only long-tail entities,\nor with unusual combinations of entities?\n", "rating": "8: Top 50% of accepted papers, clear accept", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}], "comment_id": ["rJeogNVb44", "BJeUcfVZ4N", "BklcN7Nb4E"], "comment_cdate": [1548989427041, 1548989069897, 1548989234006], "comment_tcdate": [1548989427041, 1548989069897, 1548989234006], "comment_tmdate": [1548989427041, 1548989273158, 1548989234006], "comment_readers": [["everyone"], ["everyone"], ["everyone"]], "comment_writers": [["AKBC.ws/2019/Conference/Paper39/Authors", "AKBC.ws/2019/Conference"], ["AKBC.ws/2019/Conference/Paper39/Authors", "AKBC.ws/2019/Conference"], ["AKBC.ws/2019/Conference/Paper39/Authors", "AKBC.ws/2019/Conference"]], "comment_reply_content": [{"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}], "comment_content": [{"title": "Thank you for the feedback", "comment": "Thank you for the review and constructive feedback. We address the raised concerns below:\n\n(1) Thank you for the comment. In the updated submission we revised the related work section to better represent existing work on coherence, including the list of works mentioned above. \n\n(2) We agree that macro-F1 could give valuable insights into the performance of the normalization algorithms on rarely seen entities. The reason we reported micro-F1 numbers was mainly to keep in line with the recent publications in the area on the particular datasets that focus on micro-F1 performance. 
However, we still evaluated the macro-F1 performance of our model on NCBI and found that it outperformed DNorm with 0.856/0.823/0.833 P/R/F1 compared to 0.828/0.809/0.819 P/R/F1 for DNorm (TaggerOne does not report macro-F1). We added a brief discussion of this to Section 6.3.1. "}, {"title": "Thank you for the feedback", "comment": "Thank you for the review and insightful questions. We address them below:\n\n(1) In general, based on anecdotal evidence it seems that the relative predictive performance of GRU and LSTM cells depends on the particular task at hand. In our case, we selected GRU cells based on experiments we performed with both LSTM and GRU cells during the model design, which showed that GRU cells led to better results. Another potential benefit of this choice is increased training performance, as GRU cells are less complex and less computation-intensive than LSTM cells. We revised Section 4.3 to explain the reasoning behind our choice of GRU cells. \n\n(2) We are referring to the complexity of modeling and performing inference from the joint probability of the entire set of tags, which is an NP-hard problem. To avoid the exponential blowup, existing techniques employ different types of approximation algorithms (e.g., Ganea and Hofmann (2017) present an N^2 approximation algorithm using a fully-connected pairwise conditional random field, which requires loopy belief propagation to train). Our proposal is to model the problem as a tag sequence using a recurrent net to avoid combinatorial explosion, though other solutions could also be proposed to reduce the complexity (e.g., model it as a tag sequence and use a conditional random field or a hidden Markov model). We cleaned up the language surrounding this point both in the abstract and in Section 4.3. \n\n(3) We intend to make the code of the best models for each dataset available upon acceptance and will be providing a link to it in the paper. \n\n(4) We attempted to obtain significance results during the author feedback period by performing 10-fold cross-validation on the NCBI disease corpus both for the best NormCo model and the best baseline model (which is TaggerOne). While we were able to obtain the evaluation metrics for NormCo, we ran into several issues while retraining TaggerOne on new splits, including (a) TaggerOne\u2019s code breaking (i.e., throwing null pointer exceptions) and (b) TaggerOne\u2019s internal F-score evaluation failing for concepts that have multiple labels (such as \u201cinherited neuromuscular disease\u201d, which is mapped to both MESH:D009468 \u201cNeuromuscular disease\u201d and MESH:D030342 \u201cGenetic diseases, inborn\u201d), which were not present in the original test set. Ultimately these issues, coupled with TaggerOne\u2019s long training times documented in Section 6.4.5, did not allow us to obtain significance results for TaggerOne. However, we were able to perform cross-validation of the best NormCo model (i.e., MC-synthetic), which resulted in an average accuracy of 0.853 with a low standard deviation of 0.013. "}, {"title": "Thank you for the feedback", "comment": "Thank you for the review! We address some of the issues raised below:\n\n(1) The reason that results on BC5 are mixed is that our model is more conservative, favoring high precision over recall (see Table 3). 
Since the BC5CDR dataset has a greater diversity of concepts than the NCBI dataset (1082 concepts in BC5CDR compared to 753 in NCBI), the lower recall becomes more important, leading to a slightly lower accuracy than the baseline models. However, note that even in this case the NormCo model still outperforms the baselines on the average LCA distance performance metric, which, as explained in the paper, takes into account not only the overall accuracy but also the severity of the errors. We added an explanation of this to Section 6.4.1. \n\n(2) The AwA models were applied only at test time and the models were not re-trained. We have added language to make this clearer in Section 6.3.1. The purpose of this experiment was to observe how abbreviation resolution affects the performance of the trained models. "}], "comment_replyto": ["H1g1fkDsWE", "Skg8RfIzfN", "BJlXzXCyfN"], "comment_url": ["https://openreview.net/forum?id=BJerQWcp6Q&noteId=rJeogNVb44", "https://openreview.net/forum?id=BJerQWcp6Q&noteId=BJeUcfVZ4N", "https://openreview.net/forum?id=BJerQWcp6Q&noteId=BklcN7Nb4E"], "meta_review_cdate": 1549911715049, "meta_review_tcdate": 1549911715049, "meta_review_tmdate": 1551128382722, "meta_review_ddate": null, "meta_review_title": "Consensus accept; reviewer concerns addressed in revisions", "meta_review_metareview": "The reviewers all agree the paper is a clear accept. The paper presents an end-to-end approach to biomedical concept normalization that supplants previous state-of-the-art pipeline systems based on more conventional BioNLP methods. Although the individual components of the solution are not novel, e.g., Siamese networks, GRUs, and distant supervision, they are combined in highly appropriate ways to solve a difficult entity linking problem. The authors did a commendable job addressing the reviewers' comments, questions, and concerns by running experiments, providing new results, updating related work to more accurately capture the fact that other entity linking approaches also capture coherence, and addressing a few minor clarity issues.", "meta_review_readers": ["everyone"], "meta_review_writers": [], "meta_review_reply_count": {"replyCount": 0}, "meta_review_url": ["https://openreview.net/forum?id=BJerQWcp6Q&noteId=rygsj8H1BV"], "decision": "Accept (Poster)"}