Tasks: Text Classification
Modalities: Text
Formats: text
Languages: English
Size: 10K - 100K
License:
{"forum": "S1xf-W5paX", "submission_url": "https://openreview.net/forum?id=S1xf-W5paX", "submission_content": {"title": "Joint Learning of Hierarchical Word Embeddings from a Corpus and a Taxonomy", "authors": ["Mohammed Alsuhaibani", "Takanori Maehara", "Danushka Bollegala"], "authorids": ["[email protected]", "[email protected]", "[email protected]"], "keywords": ["Hierarchical Embeddings", "Word Embeddings", "Taxonomy"], "TL;DR": "We presented a method to jointly learn a Hierarchical Word Embedding (HWE) using a corpus and a taxonomy for identifying the hypernymy relations between words.", "abstract": "Identifying the hypernym relations that hold between words is a fundamental task in NLP. Word embedding methods have recently shown some capability to encode hypernymy. However, such methods tend not to explicitly encode the hypernym hierarchy that exists between words. In this paper, we propose a method to learn a hierarchical word embedding in a speci\ufb01c order to capture the hypernymy. To learn the word embeddings, the proposed method considers not only the hypernym relations that exists between words on a taxonomy, but also their contextual information in a large text corpus. The experimental results on a supervised hypernymy detection and a newly-proposed hierarchical path completion tasks show the ability of the proposed method to encode the hierarchy. 
Moreover, the proposed method outperforms previously proposed methods for learning word and hypernym-specific word embeddings on multiple benchmarks.", "pdf": "/pdf/7015851a783625fe34355eae4a996f9298bebc4d.pdf", "archival status": "Archival", "subject areas": ["Machine Learning", "Natural Language Processing", "Knowledge Representation"], "paperhash": "alsuhaibani|joint_learning_of_hierarchical_word_embeddings_from_a_corpus_and_a_taxonomy", "_bibtex": "@inproceedings{\nalsuhaibani2019joint,\ntitle={Joint Learning of Hierarchical Word Embeddings from a Corpus and a Taxonomy},\nauthor={Mohammed Alsuhaibani and Takanori Maehara and Danushka Bollegala},\nbooktitle={Automated Knowledge Base Construction (AKBC)},\nyear={2019},\nurl={https://openreview.net/forum?id=S1xf-W5paX}\n}"}, "submission_cdate": 1542459642242, "submission_tcdate": 1542459642242, "submission_tmdate": 1580939652007, "submission_ddate": null, "review_id": ["B1eYBtGmG4", "ryg8XPy4fE", "ByxfoMkHM4"], "review_url": ["https://openreview.net/forum?id=S1xf-W5paX&noteId=B1eYBtGmG4", "https://openreview.net/forum?id=S1xf-W5paX&noteId=ryg8XPy4fE", "https://openreview.net/forum?id=S1xf-W5paX&noteId=ByxfoMkHM4"], "review_cdate": [1547016513446, 1547069213590, 1547133594116], "review_tcdate": [1547016513446, 1547069213590, 1547133594116], "review_tmdate": [1550269647959, 1550269647737, 1550269647515], "review_readers": [["everyone"], ["everyone"], ["everyone"]], "review_writers": [["AKBC.ws/2019/Conference"], ["AKBC.ws/2019/Conference"], ["AKBC.ws/2019/Conference"]], "review_reply_count": [{"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}], "review_replyto": ["S1xf-W5paX", "S1xf-W5paX", "S1xf-W5paX"], "review_content": [{"title": "Needs better evaluation ", "review": "This paper presents a method to jointly learn word embeddings using co-occurrence statistics as well as by incorporating hierarchical information from semantic networks like WordNet. 
\n\nIn terms of novelty, this work only provides a simple extension to earlier papers [1,2] by changing the objective function to instead make the word embeddings of a hypernym pair similar but with a scaling factor that depends on the distance of the words in the hierarchy.\n\nWhile the method seems to learn some amount of semantic properties, most of the baselines reported seem either outdated or ill fitted to the task and do not serve well to evaluate the value of the proposed method for the given task. \nFor example the JointRep baseline is based on a semantic similarity task which primarily learns word embeddings based on synonym relations and seems to not be an appropriate baseline to compare the current approach to.\nFurther, there are two primary methods of incorporating semantic knowledge into word embeddings - by incorporating them during the training procedure or by post processing the vectors to include this knowledge. While I understand that this method falls into the first category, it is still important and essential to compare to both types of strategies of word vector specialization. In this regard [3] has been shown to beat HyperVec and other methods on hypernym detection and directionality benchmarks and should be included in the results. It would be also interesting to see how the current approach fares on graded hypernym benchmarks such as Hyperlex. \n\nMinor comments : Section 4.2 there is a word extending out of the column boundaries. \n\n\n[1] Alsuhaibani, Mohammed, et al. \"Jointly learning word embeddings using a corpus and a knowledge base.\" PloS one (2018)\n[2] Bollegala, Danushka, et al. \"Joint Word Representation Learning Using a Corpus and a Semantic Lexicon.\" AAAI. 2016.\n[3] Vuli\u0107, Ivan, and Nikola Mrk\u0161i\u0107. 
\"Specialising Word Vectors for Lexical Entailment.\" NAACL-HLT 2018.", "rating": "5: Marginally below acceptance threshold", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}, {"title": "Well motivated approach but some concerns", "review": "This paper proposed a joint learning method of hypernym from both raw text and supervised taxonomy data. The innovation is that the model is not only modeling hypernym pairs but also the whole taxonomy. The experiments demonstrate better or similar performance on hypernym pair detection task and much better performance on a new task called \"hierarchical path completion\". The method is good motivated and intuitive. Lots of analysis on the results are done which I liked a lot. But I have some questions for the authors.\n\n1) One major question I have is for the taxonomy evaluation part, I think there are works trying to do taxonomy evaluation by using node-level and edge-level evaluation. 'A Short Survey on Taxonomy Learning from Text Corpora:\nIssues, Resources, and Recent Advances' from NAACL 2017 did a nice summarization for this. Is there any reason why this evaluation is not applicable here?\n\n2) At the end of section 4.2, the author mentioned Retrofit, JointReps and HyperVec are using the original author prepared wordnet data. Then the supervised training data is different for different methods? Is there a more controlled experiment where all experiments are using the same training data?\n\n3) In section 4.4, there are three prediction methods are introduced including ADD, SUM, and DH. The score is calculated using cosine similarity. But the loss function used in the model is by minimizing the L2 distance between word embeddings? Is there any reason why not use L2 but cosine similarity in this setting? Also, I'm assuming SUM and DH are using cosine similarity as well? 
It might be useful to add that bit of information.\n\n4) The motivation for this paper is to use a taxonomy instead of just hypernym pairs? Another line of research tries to encode the taxonomy structure into the geometry of the embedding space, such that the taxonomy is automatically captured by the self-organized geometry. Some papers include, but are not restricted to, 'Order-Embeddings of Images and Language', \n'Probabilistic Embedding of Knowledge Graphs with Box Lattice Measures'. Probably this line of work is not directly comparable, but it might be useful to add to the related work section.\n\nA few minor points: \n1) In equation four of section 3, t_max appears for the first time. This equation may be part of the GloVe objective, but a one-sentence explanation of t_max might be needed here.\n2) At the end of section 3, the calculation of gradients for the different parameters is given, but the optimization is actually performed by AdaGrad. Maybe it would be good to move these equations to the appendix.\n3) In the section 4.1 experimental set-up, the WordNet training data is generated by performing transitive closure, I assume? How do the WordNet synsets get mapped to their surface forms in order to do further training and evaluation?\n", "rating": "5: Marginally below acceptance threshold", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}, {"title": "Recent relevant work not adequately discussed or compared. ", "review": "Paper summary: This paper presents a method of learning word embeddings for the purpose of representing hypernym relations. The learning objective is the sum of (a) a measure of the \u201cdistributional inclusion\u201d difference vector magnitude and (b) the GloVe objective. 
Experiments on four benchmark datasets are mostly (but not entirely) positive versus some other methods.\n\nThe introduction emphasizes the need for a representation that is \u201cable to encode not only the direct hypernymy relations between the hypernym and hyponym words, but also the indirect and the full hierarchical hypernym path.\u201d There has been significant interest in recent work on representations aiming for exactly this goal, including Poincare Embeddings [Nickel and Kiela], Order Embeddings [Vendrov et al], Probabilistic Order Embeddings [Lai and Hockenmaier], and Box Embeddings [Vilnis et al]. It seems that there should be empirical comparisons to these methods.\n\nI found the order of presentation awkward, and sometimes hard to follow. For example, I would have liked to see a clear explanation of test-time inference before the learning objective was presented, and I\u2019m still left wondering why there is not a closer correspondence between the multiple inference methods described (in Table 3) and the learning objective.\n\nI would also have liked to see a clear motivation for why the GloVe embedding is compatible with and beneficial for the hypernym task. 
\u201cRelatedness\u201d is different than \u201chypernymy.\u201d", "rating": "4: Ok but not good enough - rejection", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}], "comment_id": ["H1leBb_WEV", "H1g6G5sxN4", "Sklm9dog44", "ByeAXLjlEV"], "comment_cdate": [1549005112455, 1548954133305, 1548953738863, 1548953125709], "comment_tcdate": [1549005112455, 1548954133305, 1548953738863, 1548953125709], "comment_tmdate": [1549005112455, 1548954133305, 1548953738863, 1548953125709], "comment_readers": [["everyone"], ["everyone"], ["everyone"], ["everyone"]], "comment_writers": [["AKBC.ws/2019/Conference/Paper26/Authors", "AKBC.ws/2019/Conference"], ["AKBC.ws/2019/Conference/Paper26/Authors", "AKBC.ws/2019/Conference"], ["AKBC.ws/2019/Conference/Paper26/Authors", "AKBC.ws/2019/Conference"], ["AKBC.ws/2019/Conference/Paper26/Authors", "AKBC.ws/2019/Conference"]], "comment_reply_content": [{"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}], "comment_content": [{"title": "Summary of changes in the new version of the paper.", "comment": "We thank all the reviewers for their valuable comments and constructive suggestions. The main concerns highlighted from the reviewers are about the evaluation part, mainly about missing an empirical comparison with some recent relevant work. 
We have now updated the original paper with:\n(1) More related work and discussion of its relevance to the proposed method\n(2) An empirical comparison with the suggested relevant work in all the evaluation tasks\n(3) A new evaluation task (section 4.4) on graded lexical entailment."}, {"title": "More evaluation tasks have been added.", "comment": "\n-->Q1: While the method seems to learn some amount of semantic properties, most of the baselines reported seem either outdated or ill fitted to the task and do not serve well to evaluate the value of the proposed method for the given task.\n\nAns: We have now updated the paper with two more recent related works, Poincare [1] and LEAR [2], and empirically compare the proposed method against them in all three evaluation tasks.\n\n[1] Poincare Embeddings [Nickel and Kiela] - NIPS 2017\n[2] Specialising Word Vectors for Lexical Entailment [Vulic and Mrksic] - NAACL-HLT 2018\n\n-->Q2: For example the JointRep baseline is based on a semantic similarity task which primarily learns word embeddings based on synonym relations and seems to not be an appropriate baseline to compare the current approach to.\n\nAns: The JointReps method can use different semantic relations (synonyms, hypernyms, hyponyms, etc.); however, here we used the hypernym relation when training JointReps, so that the proposed method can be compared with it in all the evaluation tasks.\n\n-->Q3: Further, there are two primary methods of incorporating semantic knowledge into word embeddings - by incorporating them during the training procedure or by post processing the vectors to include this knowledge. While I understand that this method falls into the first category, it is still important and essential to compare to both types of strategies of word vector specialization.\n\nAns: The evaluation proposed in the paper does, in fact, compare against both types of strategies. 
JointReps and HyperVec fall into the first category, whereas the Retrofit method and the newly added (in the updated version of the paper) LEAR method fall into the second. In all of these methods, we used the hypernym relations to incorporate the semantic knowledge into the learnt embeddings.\n\n-->Q4: In this regard [3] has been shown to beat HyperVec and other methods on hypernym detection and directionality benchmarks and should be included in the results. \n\nAns: Thank you for the suggestion. We have now added LEAR to the updated version of the paper and empirically compare the proposed method against it in all three evaluation tasks.\nThe proposed method reports better or comparable results to LEAR in two of the main evaluation tasks (hypernym detection and hierarchical path completion).\n\n-->Q5: It would also be interesting to see how the current approach fares on graded hypernym benchmarks such as Hyperlex.\n\nAns: Thank you for the suggestion. We have added a new sub-section (section 4.4) in the updated version of the paper with a new evaluation task on graded lexical entailment prediction using HyperLex.\n\n-->Q6: In section 4.2 there is a word extending out of the column boundaries. \n\nAns: Thank you. This has been fixed in the updated version of the paper."}, {"title": "Concerns clarification", "comment": "-->Q1: One major question I have is about the taxonomy evaluation part: there are works that do taxonomy evaluation using node-level and edge-level evaluation. 'A Short Survey on Taxonomy Learning from Text Corpora:\nIssues, Resources, and Recent Advances' from NAACL 2017 did a nice summarization of this. Is there any reason why this evaluation is not applicable here?\n\nAns: The above-mentioned paper is mainly about recent work on taxonomy construction from free text, which is different from what we are proposing in this paper. 
Our goal is not to create taxonomies but to learn word embeddings that preserve taxonomic information as vector representations. As we do not create taxonomies, we cannot evaluate the word embeddings using taxonomy evaluation methods.\n\n-->Q2: At the end of section 4.2, the authors mention that Retrofit, JointReps and HyperVec use the original authors' prepared WordNet data. Then the supervised training data is different for different methods? Is there a more controlled experiment where all methods use the same training data?\n\nAns: The models Retrofit, JointReps, and HyperVec (and the two newly added recent methods, Poincare and LEAR) work with pairwise relation data. However, the proposed HWE works on a full hierarchical hypernym path. Therefore, the models require slightly different data.\n\n-->Q3: In section 4.4, three prediction methods are introduced, including ADD, SUM, and DH. The score is calculated using cosine similarity. But the loss function used in the model minimizes the L2 distance between word embeddings? Is there any reason to use cosine similarity rather than L2 in this setting?\n\nAns: We empirically tested both L2 distance and cosine similarity and found cosine to work better in the given experiment.\n\n-->Q4: Also, I'm assuming SUM and DH use cosine similarity as well? It might be useful to add that bit of information. \n\nAns: Yes. This has been added in the updated version of the paper.\n\n-->Q5: The motivation for this paper is to use a taxonomy instead of just hypernym pairs?\n\nAns: Yes.\n\n-->Q6: Another line of research tries to encode the taxonomy structure into the geometry of the embedding space, such that the taxonomy is automatically captured by the self-organized geometry. Some papers include, but are not restricted to, 'Order-Embeddings of Images and Language', 'Probabilistic Embedding of Knowledge Graphs with Box Lattice Measures'. 
Probably this line of work is not directly comparable, but it might be useful to add to the related work section.\n\nAns: Thank you for the suggestion. We have now updated the paper with two more related works, Poincare [1] and LEAR [2], and empirically compare the proposed method against them.\nPlease note that Probabilistic and Box Embeddings are relatively less related to the proposed method, as they work on phrase embeddings and use pre-trained word embeddings to feed an LSTM for learning phrase embeddings.\n\n[1] Poincare Embeddings [Nickel and Kiela] - NIPS 2017\n[2] Specialising Word Vectors for Lexical Entailment [Vulic and Mrksic] - NAACL-HLT 2018\n\n-->Q7: In equation four of section 3, t_max appears for the first time. This equation may be part of the GloVe objective, but a one-sentence explanation of t_max might be needed here.\n\nAns: Yes, it is part of the GloVe weighting function, which becomes relatively small for high-frequency words; t_max is set to 100, as stated in section 4.1.\n\n-->Q8: At the end of section 3, the calculation of gradients for the different parameters is given, but the optimization is actually performed by AdaGrad. Maybe it would be good to move these equations to the appendix.\n\nAns: The gradient equations have been moved to the appendix in the updated version of the paper.\n\n-->Q9: In the section 4.1 experimental set-up, the WordNet training data is generated by performing transitive closure, I assume? 
How do the WordNet synsets get mapped to their surface forms in order to do further training and evaluation?\n\nAns: We lemmatise the corpus and use the form given in WordNet as the surface form."}, {"title": "More recent relevant work has been added", "comment": "-->Q1: There has been significant interest in recent work on representations aiming for exactly this goal, including:\nPoincare Embeddings [Nickel and Kiela], \nOrder Embeddings [Vendrov et al], \nProbabilistic Order Embeddings [Lai and Hockenmaier], \nBox Embeddings [Vilnis et al]. \nIt seems that there should be empirical comparisons to these methods. \n\nAns: Thank you for the suggestion. Poincare is an excellent fit for empirical comparison against the proposed method, as both share a similar spirit of explicitly learning hierarchical word embeddings rather than hypernymy-specific embeddings.\nWe have now updated the paper with two more related works, Poincare [1] and LEAR [2], and empirically compare the proposed method against them.\nWe have also updated the paper with a new evaluation task (section 4.4) to test the proposed method on graded lexical entailment, as suggested by Reviewer 3.\nThe proposed method reports an improvement over most of the prior works, including Poincare, in the three tasks (hypernym detection, graded lexical entailment, and hierarchical path completion), except for LEAR on two datasets.\nMore interestingly, Poincare performs well on the proposed hierarchical path completion task, in contrast to the other methods apart from HWE. The fact that Poincare embeddings, a hierarchical word embedding learning method, report good performance on this hierarchical path completion task suggests that it is an appropriate task for evaluating hierarchies and embeddings. 
\n\nPlease note that Probabilistic and Box Embeddings are relatively less related to the proposed method, as they work on phrase embeddings and use pre-trained word embeddings to feed an LSTM for learning phrase embeddings.\n\n[1] Poincare Embeddings [Nickel and Kiela] - NIPS 2017\n[2] Specialising Word Vectors for Lexical Entailment [Vulic and Mrksic] - NAACL-HLT 2018\n\n\n-->Q2: I found the order of presentation awkward, and sometimes hard to follow. For example, I would have liked to see a clear explanation of test-time inference before the learning objective was presented, and I\u2019m still left wondering why there is not a closer correspondence between the multiple inference methods described (in Table 3) and the learning objective.\n\nAns: We use the hierarchical word embeddings produced by the proposed method (HWE) in three tasks: hypernym detection (section 4.3), graded lexical entailment (section 4.4) and hierarchical path completion (section 4.5).\nEach task has different inference methods, which is why we describe the inference methods under each section separately and not in the section on learning hierarchical word embeddings.\nFor example, for the first task (section 4.3) we used the concatenation approach, as stated in that section.\nSimilarly, for the graded lexical entailment task, we used the inference method described in section 4.4 (Eq. (6)).\nThe inference methods described in Table 3 are specific to the hierarchical path completion task.\nAmong the different inference methods compared in Table 3, ADD corresponds closely to the training objective used by the HWE learning method we propose (see Eq. (2)). This might explain why ADD turns out to be the best inference method in Table 3.\n\n\n-->Q3: I would also have liked to see a clear motivation for why the GloVe embedding is compatible with and beneficial for the hypernym task. 
\u201cRelatedness\u201d is different than \u201chypernymy.\u201d\n\nAns: All the datasets in the hypernym identification task are pairwise relation data, and it could be the case that it is easier for such distributional methods to pick up hypernymy, since hypernyms tend to occur in similar contexts."}], "comment_replyto": ["S1xf-W5paX", "B1eYBtGmG4", "ryg8XPy4fE", "ByxfoMkHM4"], "comment_url": ["https://openreview.net/forum?id=S1xf-W5paX&noteId=H1leBb_WEV", "https://openreview.net/forum?id=S1xf-W5paX&noteId=H1g6G5sxN4", "https://openreview.net/forum?id=S1xf-W5paX&noteId=Sklm9dog44", "https://openreview.net/forum?id=S1xf-W5paX&noteId=ByeAXLjlEV"], "meta_review_cdate": 1549796187292, "meta_review_tcdate": 1549796187292, "meta_review_tmdate": 1551128373490, "meta_review_ddate": null, "meta_review_title": "Paper with initial unawareness of important related work but convincing revision", "meta_review_metareview": "All reviewers voiced concerns regarding the comparison to recent related work. However, in my view, the authors addressed these concerns well in their revision, comparing directly against Poincar\u00e9 embeddings and LEAR. While the comparison reveals mixed results with respect to LEAR, I believe this work is well executed and of interest to the AKBC community.", "meta_review_readers": ["everyone"], "meta_review_writers": [], "meta_review_reply_count": {"replyCount": 0}, "meta_review_url": ["https://openreview.net/forum?id=S1xf-W5paX&noteId=BkeQD7Kp4E"], "decision": "Accept (Poster)"}
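The rebuttal discussion above refers to inference methods (ADD, SUM, DH) that score candidate words with cosine similarity, with ADD corresponding most closely to the training objective. A minimal sketch of that idea follows; the function names and toy vectors are illustrative assumptions, not the authors' actual code or data:

```python
import math

def cosine(u, v):
    # Cosine similarity between two embedding vectors.
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def add_score(path_vectors, candidate):
    # ADD-style inference (illustrative): sum the embeddings along the
    # known hypernym path, then score a candidate word by cosine
    # similarity to that sum.
    summed = [sum(dims) for dims in zip(*path_vectors)]
    return cosine(summed, candidate)

# Toy 3-dimensional embeddings (illustrative values only).
path = [[1.0, 0.0, 0.0], [0.5, 0.5, 0.0]]
candidate = [1.0, 0.5, 0.0]
score = add_score(path, candidate)
```

In a real setting the vectors would come from the learned hierarchical embeddings, and the candidate with the highest score would complete the hypernym path.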