Tasks: Text Classification
Modalities: Text
Formats: text
Languages: English
Size: 10K - 100K
File size: 32,832 Bytes
{"forum": "YN8fkglNA", "submission_url": "https://openreview.net/forum?id=YN8fkglNA", "submission_content": {"keywords": ["knowledge base construction", "unsupervised", "matrix factorization"], "authorids": ["AKBC.ws/2020/Conference/Paper54/Authors"], "title": "Sampo: Unsupervised Knowledge Base Construction for Opinions and Implications", "authors": ["Anonymous"], "pdf": "/pdf/b6d9af0252dae3b8d808542bc0d16f3a37b0c1c8.pdf", "subject_areas": ["Information Extraction", "Relational AI"], "abstract": "Knowledge bases (KBs) have long been the backbone of many real-world applications and services. There are many KB construction (KBC) methods that can extract factual information, where relationships between entities are explicitly stated in text. However, they cannot model implications between opinions which are abundant in user-generated text such as reviews and often have to be mined. Our goal is to develop a technique to build KBs that can capture both opinions and their implications. Since it can be expensive to obtain training data to learn to extract implications for each new domain of reviews, we propose an unsupervised KBC system, Sampo, Specifically, Sampo is tailored to build KBs for domains where many reviews on the same domain are available. We generate KBs for 20 different domains using Sampo and manually evaluate KBs for 6 domains. Our experiments show that KBs generated using Sampo capture information otherwise missed by other KBC methods. Specifically, we show that our KBs can provide additional training data to fine-tune language models that are used for downstream tasks such as review comprehension.", "paperhash": "anonymous|sampo_unsupervised_knowledge_base_construction_for_opinions_and_implications"}, "submission_cdate": 1581705806186, "submission_tcdate": 1581705806186, "submission_tmdate": 1587175158177, "submission_ddate": null, "review_id": ["Wy4oFhnST2c", "eYummtXaII", "aPuaJJ5t6OJ"], "review_url": ["https://openreview.net/forum?id=YN8fkglNA¬eId=Wy4oFhnST2c", "https://openreview.net/forum?id=YN8fkglNA¬eId=eYummtXaII", "https://openreview.net/forum?id=YN8fkglNA¬eId=aPuaJJ5t6OJ"], "review_cdate": [1584996284567, 1585320890983, 1585363939211], "review_tcdate": [1584996284567, 1585320890983, 1585363939211], "review_tmdate": [1587177909655, 1585695518527, 1585695518248], "review_readers": [["everyone"], ["everyone"], ["everyone"]], "review_writers": [["AKBC.ws/2020/Conference/Paper54/AnonReviewer2"], ["AKBC.ws/2020/Conference/Paper54/AnonReviewer1"], ["AKBC.ws/2020/Conference/Paper54/AnonReviewer3"]], "review_reply_count": [{"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}], "review_replyto": ["YN8fkglNA", "YN8fkglNA", "YN8fkglNA"], "review_content": [{"title": "Clearly written and motivated paper, but need more analysis and explanation for experiments", "review": "This paper addresses the problem of building KBs from datasets like product reviews, that identifies implications between opinions from reviews. The authors claim that opinions and their implications often do not co-occur within the same review and the annotation for such implication relation is expensive, for which matrix factorization techniques turn to be a promising approach. \n\nStrength:\n1. This paper is well written and most contents are easy to follow. The motivation is clear and the method description is well presented with reasoning behind each component choice.\n2. The authors released the results of generated KB from their model on 20 domains, which could be useful for the research community.\n3. 
Their experimental results seem to be in favor of their proposed method, although it's a very simple method.\n\nWeaknesses and Questions:\n1. Intuitively, the \"implication\" relationship proposed in this paper should be directed. A implies B doesn't mean B implies A. However, the method introduced in the paper decides such implication relationships based on cosine similarity, which is symmetrical.\n2. Please provide more details about how you obtain representations from universal schema, as this seems to be the major reason why you have the huge performance gap between your model and universal schema. \n3. For using pre-trained LMs, which BERT model did you use? \n4. When applying pre-trained models, why do you mask out the aspect token when predicting the mod token?\n5. Do you restrict your modifier and aspect tokens to be unigrams? If yes, you should clarify this in the paper, as this is inconsistent with the motivation examples you provided. If not, this is not comparable between pre-trained models and other models. And this is very important as the difference between your model and the pre-trained LM doesn't seem to be big.\n6. In section 4.3, if you are feeding the LM supervision from the generated KB, this is technically not \"supervised\" as there is quite a lot of noise from the generated KB. So it's hard to tell whether the reason why such a \"supervised\" model cannot achieve good results is the limitation of the LM or the noise from training. Thus, your conclusion in 4.3 needs more convincing evidence. \n7. Again in section 4.3, you say \"It takes as input a pair of opinions, $o_h$ and $o_t$ and predicts a label 1 if $o_h$ implies $o_t$ or 0 otherwise.\" It seems like you don't feed the LM any context for such extracted opinions. This is not a typical setting for an LM, although I understand it's hard to incorporate such \"context\" in your setting. \n8. There is little analysis of comparison across domains.\n9. More prediction examples and error analysis are needed.\n\n\nSome minor points in writing:\n1. I found it hard to follow the second paragraph of the second page, which starts with \"There are a number of challenges in ...\". I didn't understand it until I finished reading some later content.\n\n2. The authors may want to provide more details about MINI-MINER. For example, do you use any NLP tools to extract certain linguistic features? And did you use this for baselines as well?\n\n3. Although the authors give a brief description of \"factorization techniques over binary matrices\" and why they think this will cause worse results, I was expecting some ablation analysis on this, but they didn't provide any.\n\n4. In section 3.3, I think the last $o_t$ in the first paragraph should be $o_h$ instead.\n\n5. How do you calculate the probability in PMI? Do you use frequency-based counts? If yes, clarify.\n\n6. I think the paper needs better experiment figures. The current ones seem very rough and some have low resolution. \n\n\n\n", "rating": "6: Marginally above acceptance threshold", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}, {"title": "Interesting kind of KB but missing discussion of assumptions", "review": "The authors propose a method to automatically build a Knowledge Base of opinions and implications between them. The KB is realized as a directed graph where nodes correspond to opinions in a canonical form (modifier, aspect), and edges indicate implications. 
It is built by factorizing a matrix of item-opinion frequencies, and finding the top k neighbors of an opinion.\n\nThe idea of creating a KB of opinions is relevant for the field of KB construction, and it opens the door to further research where these graphs can be used.\n\n\nStrengths\n- The proposed method allows obtaining a KB of opinions from raw text as input. For cases where the performance of an opinion extractor can be harmed due to a change in the domain, the authors propose a set of rules.\n- The authors propose a method for improving robustness, based on optimization under noisy data.\n- The experiments are thorough, showing results with multiple datasets from different domains, and relevant baselines.\n- Overall, the paper is clear and well structured.\n\n\nWeaknesses\n\nThe paper starts with the promise of modeling implications between opinions, but later I realize that this modeling is limited by assumptions and design choices that are not addressed. \n\nThe first is: implications between opinions are, as across all of NLP, very sensitive to relatively small lexical variations in text. Take for example the opinions O1 = \"The movie has complex characters\" and O2 = \"There's good writing going on\". The proposed pipeline would then give as a candidate implication O1 -> O2. What if O1 changes to \"The movie has unnecessarily complex characters\"? How does the opinion extractor behave in these cases? How does this affect performance? \n\nThe second, and probably more crucial, is: the directions of the implications are not really modeled by the proposed method. In fact, following the example above, the proposed method might propose both O1 -> O2 and O2 -> O1, because they are close in the space of opinions, and the graph built via k nearest neighbors is undirected.\n\n\nI wonder how unsupervised the method really is, in contrast with what the authors claim. It relies heavily on an opinion extractor, which rather means that when going from raw text to opinions, there is not really anything to learn, but any potential errors from this part of the pipeline are propagated to the factorization step.\n\nA core component is the minimization of the reconstruction error of the item/opinion matrix, but the way this minimization is performed in practice is not specified.\n\nThese weaknesses are more related to how the work is conveyed, and I think the paper would benefit from discussing them. The actual impact of the proposed method and its relevance for the conference are still valuable.\n", "rating": "6: Marginally above acceptance threshold", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}, {"title": "Happy to see the unsupervised method, would like some clearer motivation", "review": "Summary of contributions:\n\nThis paper presents an entirely unsupervised method for taking as input a corpus of reviews about a set of items and building as output a directed graph where each node represents an opinion phrase (extracted from text) and edges indicate implication relationships.\n\nAfter extracting opinion phrases from the review corpus (relying largely on prior work) the method constructs a matrix where rows represent items and columns indicate opinions, with the cells containing counts of how often that opinion was expressed for the item. 
Matrix factorization is applied to produce embeddings for each opinion, with edges then created between k-nearest neighbors in the embedding space.\n\nThe contributions can be enumerated as:\n- proposal of an entirely unsupervised system for construction of this opinion graph\n- use of matrix factorization to determine similarity between opinions\n- application of the method to data in several domains and analysis of results\n- release of data and code\n\n\nStrengths:\n\nIt is a fully unsupervised method that takes a corpus of text and produces a potentially useful piece of knowledge.\n\nThe matrix factorization approach is a simple but elegant way of discovering similarity between extracted phrases expressed in text. \n\nEvaluations are conducted in several very different domains, from movies to electronics to restaurants, showing that the method is domain independent.\n\nThe comparisons with BERT and the discussion of the potential complementarity between the proposed approach and a language model approach are nice.\n\nThe paper is generally well written and easy to follow.\n\n\nWeaknesses:\n\nMy primary complaint is that I would like to see more motivation of the utility of the constructed knowledge base. (I also somewhat disagree with the use of \"knowledge base\" to describe the output graph, as it contains a single relation and entity type, and that relation has ambiguous semantics.) Some specific issues that should be addressed:\n\n- What is the use of such a graph? How would one use it in a downstream task? \n- The abstract asserts that you show that your model can benefit downstream tasks such as review comprehension, but you do not directly show this. If you make this claim, you should provide an experiment showing improvement on this task by incorporating your graph.\n- Why do we assume that proximity between the opinion embeddings indicates implication rather than just similarity? I assume that there will be some cases where two opinions in your graph imply each other, while in other cases there may be only a single edge (if one happens to have more nearby neighbors). Why should we consider these cases to be different? \n- What exactly does implication mean in terms of opinions? I'd like to see a clearer definition here.\n\nWith regard to evaluation, did you measure the inter-annotator agreement between crowdworkers? The task of assessing \"implication\" between opinions seems quite muddy to me. In fact, I personally might disagree with your flagship example of \"complex characters\" implying \"good writing\". I don't think that is always true, and if I was given that example as a crowdworker I may have marked it as incorrect. Maybe my opinion would be in the minority, but this is why I would like to see a better and clearer motivation for the task.\n\nWith regard to your baselines, I was a little unclear on the application of universal schema. Universal schema uses a matrix where rows are entity pairs and columns are relations. You mention that you map all item-opinion pairs to a single \"has-a\" relation. Does that mean the matrix has only a single column?\n\nThe font size of Figures 2 and 3 is too small.\n\nMinor point: There is a typo in the first full sentence on page 2, where \"second review\" should read \"first review\".\n\nOverall Conclusion:\n\nThis paper presents a novel unsupervised method for extracting opinion phrases and implications between them from a corpus of reviews. The paper would benefit from better motivation for its problem and solution. 
That said, its unsupervised approach for knowledge extraction offers a useful method for capturing semantic relationships between textual phrases, and the experiments show strong performance against baselines.", "rating": "6: Marginally above acceptance threshold", "confidence": "3: The reviewer is fairly confident that the evaluation is correct"}], "comment_id": ["e5Tynnr5dnR", "ei5C7B_fwsE", "HLDiRAIiYZ6", "ZaiAlP9pbm", "KSqB9hZ6ODB", "uag9wZDNE6Y", "WRxRCYAkVR", "IstjG-oXFo6"], "comment_cdate": [1587178069590, 1587142576346, 1587131691492, 1586305226343, 1586304196086, 1586304651211, 1586304546943, 1586304407417], "comment_tcdate": [1587178069590, 1587142576346, 1587131691492, 1586305226343, 1586304196086, 1586304651211, 1586304546943, 1586304407417], "comment_tmdate": [1587178069590, 1587142576346, 1587131691492, 1586305226343, 1586304776906, 1586304651211, 1586304546943, 1586304492105], "comment_readers": [["everyone"], ["everyone"], ["everyone"], ["everyone"], ["AKBC.ws/2020/Conference/Paper54/Authors", "AKBC.ws/2020/Conference/Paper54/Reviewers/Submitted", "AKBC.ws/2020/Conference/Paper54/Area_Chairs", "AKBC.ws/2020/Conference/Program_Chairs", "everyone"], ["everyone"], ["everyone"], ["everyone"]], "comment_writers": [["AKBC.ws/2020/Conference/Paper54/AnonReviewer2", "AKBC.ws/2020/Conference"], ["AKBC.ws/2020/Conference/Paper54/Authors", "AKBC.ws/2020/Conference"], ["AKBC.ws/2020/Conference/Paper54/AnonReviewer1", "AKBC.ws/2020/Conference"], ["AKBC.ws/2020/Conference/Paper54/Authors", "AKBC.ws/2020/Conference"], ["AKBC.ws/2020/Conference/Paper54/Authors", "AKBC.ws/2020/Conference"], ["AKBC.ws/2020/Conference/Paper54/Authors", "AKBC.ws/2020/Conference"], ["AKBC.ws/2020/Conference/Paper54/Authors", "AKBC.ws/2020/Conference"], ["AKBC.ws/2020/Conference/Paper54/Authors", "AKBC.ws/2020/Conference"]], "comment_reply_content": [{"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}], "comment_content": [{"title": "Thanks for adding clarifications", "comment": "Thanks for providing clarifications to some major points like implication directions, etc. I think the paper is in better shape now."}, {"title": "response to clarification", "comment": "We meant that we used unsupervised extractors in our system to demonstrate that it can find interesting implications even by relying on simple rule-based extractors."}, {"title": "thanks for clarification", "comment": "Thanks for your clarifications, in particular on how you determine implication. \n\nQuick question: what do you mean by \"pushed for relying on unsupervised extractors\"?"}, {"title": "Response to Reviewer #3 [1/1]", "comment": "R3W1: Q: How are implications (which are directed) derived from cosine similarities, which are symmetric?\nA: We have updated Section 3.3 to reflect this discussion better. The proximity between the opinion embeddings identifies similarity. However, the k-nearest-neighbor graph constructed using these similarities is a directed graph. The directions of edges enable Sampo to discover implications. For instance, we see that \u201cbad breakfast\u201d is among the top 5 neighbors of \u201cburnt coffee taste\u201d, but \u201cburnt coffee taste\u201d does not appear in the top neighbors of \u201cbad breakfast\u201d and more generic opinions such as \u201climited options\u201d, \u201cpoor service\u201d are closer to \u201cbad breakfast\u201d. 
This helps us identify that there should be a directed edge from \u201cburnt coffee taste\u201d to \u201cbad breakfast\u201d and not vice versa. On the other hand, \u201cpoor breakfast\u201d and \u201cbad breakfast\u201d appear in the top neighbors of each other and we can conclude that the implication edge goes both ways, which further implies the two opinions are equivalent.\n\nR3W2: Q: How are representations obtained from Universal Schema?\nA: We considered the items being reviewed as entities and considered each opinion O as a unary relation which we called \u201chas-a-O\u201d. For instance, we consider \u201cCasablanca\u201d has-a-(romantic,theme). Thus, our matrix has as many relations as our extracted opinions. Universal schema computes a representation for each relation and we use these representations to measure the similarity between different opinions. We have updated the description of our setup in Section 4.1.\n\nR3W3: Q: Which BERT model is used?\nA: We used the uncased large BERT model, which has 24 layers, a hidden state of size 1024, 16 self-attention heads, and 340M parameters. We have included these details in the newly uploaded revision.\n\nR3W4: Q: Why do you mask out the aspect token when predicting the mod token?\nA: Our goal is to identify the extent to which pre-trained models (e.g., BERT) can identify implications between opinions. As is mentioned in the paper, the baseline computes the joint probability of a MOD-ASP pair to be able to rank the likelihood of an implication relation. We compute this joint probability as\nP(MOD-ASP) = P(MOD) * P(ASP|MOD), which is why in the first step (which corresponds to the first term), the ASP is masked.\n\nR3W5: Q: Are modifiers and aspects restricted to be unigrams (in the LM baseline)?\nA: We restrict the modifiers and aspects to be unigrams only for the pre-trained LM baseline, following the same approach as the LAMA knowledge probing approach by Petroni et al. 2019. To be fair in our evaluation, we measure the precision and recall of this baseline on the subset of opinions which consist of unigram modifiers and aspects. We will clarify this further in the paper.\n\nR3W6: Q: Is the \u201csupervised\u201d method valid if you use the generated KB for supervision?\nA: We should clarify that to train the supervised technique, we presented our KB to crowdworkers to filter out the incorrect implications. We used the labeled data to train the supervised methods. We have adjusted the discussion in Section 4.3 to reflect this.\n\nR3W7: Q: Did you consider providing the context of opinions to the LM?\nA: This is a valid point, as LMs might be able to capture more given the \u201ccontext\u201d. In our setting, an opinion might appear in multiple contexts, so besides setting up the architecture to incorporate the context, we would have to decide which contexts should be fed to the system. This is an interesting direction to explore and thanks for pointing that out.\n\nR3W8: Q: More analysis across domains?\nA: We agree that comparing the performance across domains would be an interesting analysis to conduct. Our focus in this paper was on evaluating the quality of the KB and conducting experiments to find the best technique for mining implication relations from reviews.\n\nR3W9: Q: More prediction examples and error analysis?\nA: We have included more examples and point to the complete set of predictions in the repository.\n\nR3W10: Q: Clarify the second paragraph of the second page.\nA: Thank you for your feedback. 
We have rephrased the discussion.\n\nR3W11: Q: Provide more details on Mini-Miner.\nA: We have included more details on Mini-Miner in the text. We have used the spaCy package to obtain the dependency parse trees of review sentences and used hand-written patterns to find opinions.\n\nR3W12: Q: Explore factorization techniques over binary matrices.\nA: In our experiments, Universal Schema serves as the binary version of our technique. We have clarified this in the experimental setup.\n\nR3W13: Q: Typo in section 3.3\nA: Thank you for pointing out the typo. We have fixed it in the revision.\n\nR3W14: Q: How is PMI calculated?\nA: Yes, we calculate the PMI based on the frequency of opinions and co-occurrence counts. We follow Church et al. 1989 (https://www.aclweb.org/anthology/P89-1010/) to compute PMI. We have updated Section 4.1 to describe this more clearly.\n\nR3W15: Q: Improve the figures.\nA: We have improved the figures in the paper."}, {"title": "Thank you to our reviewers!", "comment": "We thank the reviewers for their encouraging and detailed feedback. We have uploaded the revision to include more details and clarifications."}, {"title": "Response to Reviewer #2 [1/1]", "comment": "Please find our response to your questions/comments below. \n\nR2W1: Q: Is the opinion extractor nuanced?\nA: The opinion extractors can identify and incorporate negations and adverbial modifiers in the \u201cmodifier\u201d for the target aspect. In the second example, the opinion extractor is expected to extract (unnecessarily complex, characters) as the opinion instead of (complex, characters). Both extractors we have used in our experiments capture these nuances and we assume other extractors are useful as long as they can capture these subtleties. \n\nR2W2: Q: Does proximity between the opinion embeddings indicate implication?\nA: The proximity between the opinion embeddings, as the reviewer suggested, identifies similarity. However, once these similarities are computed, we proceed to create a K-nearest-neighbor graph. A K-nearest-neighbor graph is a directed graph and the directions of edges are what enable Sampo to discover implications. For instance, we see that \u201cbad breakfast\u201d is among the top 5 neighbors of \u201cburnt coffee taste\u201d, but \u201cburnt coffee taste\u201d does not appear in the top 5 neighbors of \u201cbad breakfast\u201d and more generic opinions such as \u201climited options\u201d, \u201cpoor service\u201d are closer to \u201cbad breakfast\u201d. This helps us identify that there should be a directed edge from \u201cburnt coffee taste\u201d to \u201cbad breakfast\u201d and not vice versa. On the other hand, \u201cpoor breakfast\u201d and \u201cbad breakfast\u201d appear in the top 5 nearest neighbors of each other and we can conclude that the implication edge goes both ways, which further implies the two opinions are equivalent. We will convey this more clearly in Section 3.3 in the uploaded revision.\n\nR2W3: Q: How unsupervised is the method really?\nA: Our problem formulation assumes access to an opinion extractor. Clearly, opinion extractors that are trained using supervised data tend to perform better. However, we pushed for relying on unsupervised (rule-based) extractors to ensure that the system can produce interesting results with no supervision. We experimented with rule-based extractors as well as supervised extractors. 
We found that while the accuracy of the extractor can affect the quality of the implications we find, it is possible to rely on highly accurate pattern-based extractors (that might have limited recall) for domains where labeled data for training an opinion extractor (that yields higher recall) is not available.\n\nR2W4: Q: What technique is used to minimize the reconstruction error?\nA: As mentioned in the paper, we are using the package Tensorly to factorize the matrices. To factorize the matrix, we use the Alternating Least Squares (ALS) technique to minimize the reconstruction error.\n"}, {"title": "Response to Reviewer #1 [2/2]", "comment": "Please find our response to your questions/comments below. \n\nR1W7: Q: Details of the Universal Schema experimental set-up?\nA: We basically considered the items being reviewed as entities and considered each opinion O as a unary relation which we called \u201chas-a-O\u201d. For instance, we consider \u201cCasablanca\u201d has-a-(romantic,theme). Thus, our matrix has as many relations as our extracted opinions. Universal schema computes a representation for each relation and we use these representations to measure the similarity between different opinions. We have updated Section 4.1 to convey this clearly.\n\nR1W8: Q: Redo figures\nA: These figures are updated in the uploaded revision.\n\nR1W9: Q: Typos\nA: We have modified the text to better reflect our point. In fact, the relation specified in the example is implied by the fact that both reviews describe the same movie. \n"}, {"title": "Response to Reviewer #1 [1/2]", "comment": "Please find our response to your questions/comments below. \n\nR1W1: Q: What is the motivation to build a KB with a single relation and entity type?\nA: It\u2019s correct that our constructed KB contains a single relation. However, there are existing KBs in the literature that also contain a single relation. For example, Probase [1] only contains the \u201cisA\u201d relation extracted from a text corpus. At the same time, we agree that humans (even when provided with a definition) might disagree over the semantics of the \u201cimplies\u201d relation. However, this is quite common in KBs that deal with common sense. The most notable examples are ATOMIC and ConceptNet, which model relations such as \u201cPersonX is seen as\u201d and \u201cX can be\u201d respectively. These KBs are still useful for downstream applications, as many studies have shown.\nSimilar to the examples mentioned above, we chose to refer to the output of our system as a knowledge base of opinions with implication as a relation, which we automatically mine from a corpus of reviews.\n\nR1W2: Q: What is the use case of the KB?\nA: As mentioned in the paper, opinion KBs like ours are useful for downstream tasks such as reading comprehension tasks (e.g., question answering, sentiment analysis, etc.), retrieval, and query expansion. For example, Google\u2019s sentiment analysis tool (see cloud.google.com/natural-language) considers \u201cThe rooms in this hotel have really thin walls\u201d a neutral statement. Our constructed KB understands that \u201cthin walls\u201d implies \u201cnoisy room\u201d. \u201cNoisy room\u201d has a negative sentiment according to Google\u2019s API. Similarly, understanding the query \u201cShow me a movie with good writing\u201d requires the knowledge that \u201ccomplex characters\u201d and \u201cfast paced action\u201d imply good writing. 
Hence, instead of searching reviews with \u201cgood writing\u201d as a keyword, a system could expand the search using our KB to additionally look for \u201ccomplex characters\u201d and \u201cfast paced action\u201d. We will convey this more clearly in the introduction of the revision.\n\nR1W3: Q: There are no experiments demonstrating improvements on the RC task\nA: BERT is a state-of-the-art model for reading comprehension. Our experiments demonstrate that the implications we discovered are often missed by BERT-like models (as they fail to predict these implications) and that our KB can be used as training data to further tune these models and improve their performance on reading comprehension. We will make this clearer in our abstract.\n\nR1W4: Q: Does proximity between the opinion embeddings indicate implication?\nA: We have updated the paper to reflect this discussion better. The proximity between the opinion embeddings, as the reviewer suggested, identifies similarity. However, once these similarities are computed, we proceed to create a K-nearest-neighbor graph. A K-nearest-neighbor graph is a directed graph and the directions of edges are what enable Sampo to discover implications. Consider two opinions O1 and O2. There are many cases where O1 is among the top K neighbors of O2, but not vice versa. For instance, we see that \u201cbad breakfast\u201d is among the top 5 neighbors of \u201cburnt coffee taste\u201d, but \u201cburnt coffee taste\u201d does not appear in the top 5 neighbors of \u201cbad breakfast\u201d and more generic opinions such as \u201climited options\u201d, \u201cpoor service\u201d are closer to \u201cbad breakfast\u201d. This helps us identify that there should be a directed edge from \u201cburnt coffee taste\u201d to \u201cbad breakfast\u201d and not vice versa. On the other hand, \u201cpoor breakfast\u201d and \u201cbad breakfast\u201d appear in the top 5 nearest neighbors of each other and we can conclude that the implication edge goes both ways, which further implies the two opinions are equivalent. We will write this more clearly in Section 3.3 in the next revision.\n\nR1W5: Q: Definition of implication?\nA: Here is the definition that we provided to the crowdworkers. Given two opinions O1 and O2, we consider O1 \u2192 O2 if opinion O1 implies or contributes to O2. By \u201cimplies\u201d, we mean that O1 is an expression that essentially conveys O2 (e.g., thin walls \u2192 noisy room). By \u201ccontributes\u201d, we mean that O1 is a valid reason for O2 to be true. For instance, \u201cthin walls\u201d \u2192 \u201cbad hotel\u201d falls into this category, as having \u201cthin walls\u201d contributes to a hotel being bad but it may not be sufficient to consider the hotel as necessarily bad. \n\nR1W6: Q: What is the inter-annotator agreement?\nA: There are certainly cases where human annotators don\u2019t agree with each other, but in general we observed a good degree of agreement among the annotators. While we have collected a single judgement for almost all mined relationships, we set aside a set of 70 questions (i.e., 10-15 questions per domain) to which all crowdworkers (roughly 1200 participants) must have provided an answer. We have observed a Fleiss Kappa agreement score of 0.76 when considering the following 4 categories: \u201cO1 implies O2\u201d, \u201cO2 implies O1\u201d, \u201cO1 and O2 are equivalent\u201d, \u201cO1 and O2 are unrelated\u201d. 
We have updated the appendix with the agreement scores to reflect this."}], "comment_replyto": ["ZaiAlP9pbm", "HLDiRAIiYZ6", "uag9wZDNE6Y", "Wy4oFhnST2c", "YN8fkglNA", "eYummtXaII", "IstjG-oXFo6", "aPuaJJ5t6OJ"], "comment_url": ["https://openreview.net/forum?id=YN8fkglNA&noteId=e5Tynnr5dnR", "https://openreview.net/forum?id=YN8fkglNA&noteId=ei5C7B_fwsE", "https://openreview.net/forum?id=YN8fkglNA&noteId=HLDiRAIiYZ6", "https://openreview.net/forum?id=YN8fkglNA&noteId=ZaiAlP9pbm", "https://openreview.net/forum?id=YN8fkglNA&noteId=KSqB9hZ6ODB", "https://openreview.net/forum?id=YN8fkglNA&noteId=uag9wZDNE6Y", "https://openreview.net/forum?id=YN8fkglNA&noteId=WRxRCYAkVR", "https://openreview.net/forum?id=YN8fkglNA&noteId=IstjG-oXFo6"], "meta_review_cdate": 1588301890305, "meta_review_tcdate": 1588301890305, "meta_review_tmdate": 1588341532376, "meta_review_ddate": null, "meta_review_title": "Paper Decision", "meta_review_metareview": "This paper addresses the task of unsupervised knowledge base construction. The reviewers like that the authors present a novel unsupervised approach, and are happy with the thorough experiments. However, they also point out that the approach could be motivated better, and that it makes many assumptions that are not explained properly. We recommend acceptance but nudge the authors to consider the reviewer suggestions.", "meta_review_readers": ["everyone"], "meta_review_writers": ["AKBC.ws/2020/Conference/Program_Chairs"], "meta_review_reply_count": {"replyCount": 0}, "meta_review_url": ["https://openreview.net/forum?id=YN8fkglNA&noteId=NjMhY90ynIC"], "decision": "Accept"}
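
Each row of this file is a single JSON object like the record above, bundling a submission's metadata, reviews, author responses, and meta-review. A minimal loading sketch follows; the filename reviews.jsonl is a hypothetical placeholder, while the field names (submission_content, review_content, rating, decision) are taken directly from the record.

```python
import json

# Hypothetical filename; the file is assumed to be JSON Lines,
# one OpenReview record per line, as in the record shown above.
with open("reviews.jsonl", encoding="utf-8") as f:
    for line in f:
        record = json.loads(line)
        print(record["submission_content"]["title"])
        print("decision:", record["decision"])
        for review in record["review_content"]:
            # e.g. "6: Marginally above acceptance threshold"
            print("  rating:", review["rating"])
```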
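The author responses describe how Sampo turns symmetric cosine similarities into directed implication edges: an edge runs from opinion A to opinion B when B is among A's top-k nearest neighbors, and a pair appearing in each other's top-k is treated as equivalent. The sketch below illustrates only that construction; the opinion embeddings are invented toy vectors, not the output of the paper's matrix factorization.

```python
import numpy as np

def knn_implication_edges(vectors, opinions, k=2):
    """Directed k-NN construction described in the author responses:
    a -> b when b is among a's top-k cosine neighbors; if b -> a also
    holds, the pair is treated as equivalent opinions."""
    normed = vectors / np.linalg.norm(vectors, axis=1, keepdims=True)
    sims = normed @ normed.T            # cosine similarity matrix
    np.fill_diagonal(sims, -np.inf)     # exclude self-matches
    topk = [set(np.argsort(-row)[:k]) for row in sims]
    edges = []
    for a in range(len(opinions)):
        for b in topk[a]:
            kind = "equivalent" if a in topk[b] else "implies"
            edges.append((opinions[a], opinions[b], kind))
    return edges

# Invented 2-D "embeddings", purely for illustration.
opinions = ["burnt coffee taste", "bad breakfast", "poor breakfast", "poor service"]
vectors = np.array([[1.0, 0.2], [0.6, 0.8], [0.55, 0.85], [0.1, 1.0]])
for a, b, kind in knn_implication_edges(vectors, opinions):
    print(a, kind, b)
```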
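One of the baselines discussed above ranks opinion pairs by PMI computed from frequency-based counts, following Church et al. 1989. A toy illustration of that standard formula, PMI(x, y) = log2(P(x, y) / (P(x) P(y))), with invented co-occurrence data and not the authors' exact implementation:

```python
from collections import Counter
from itertools import combinations
from math import log2

# Invented toy data: each "review" is the set of opinions extracted from it.
reviews = [
    {"thin walls", "noisy room"},
    {"thin walls", "noisy room", "bad hotel"},
    {"noisy room"},
    {"thin walls", "bad hotel"},
]

n = len(reviews)
single = Counter(o for review in reviews for o in review)
pair = Counter(frozenset(p) for review in reviews
               for p in combinations(sorted(review), 2))

def pmi(x, y):
    # Frequency-based probability estimates from the counts above.
    p_xy = pair[frozenset((x, y))] / n
    return log2(p_xy / ((single[x] / n) * (single[y] / n)))

print(pmi("thin walls", "noisy room"))
```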
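The responses also state that the item-opinion count matrix is factorized with the Tensorly package using Alternating Least Squares (ALS). The snippet below is a bare-bones NumPy version of ALS for a small dense matrix, written only to make the alternating update explicit; it is not the authors' Tensorly setup, and the rank, iteration count, and regularization constant are invented choices.

```python
import numpy as np

def als_factorize(M, rank=2, iters=50, reg=0.1, seed=0):
    """Bare-bones ALS: alternately solve the two regularized
    least-squares problems for U and V in M ~ U @ V.T."""
    rng = np.random.default_rng(seed)
    n_items, n_opinions = M.shape
    U = rng.standard_normal((n_items, rank))
    V = rng.standard_normal((n_opinions, rank))
    eye = reg * np.eye(rank)
    for _ in range(iters):
        # Fix V and solve for U, then fix U and solve for V.
        U = M @ V @ np.linalg.inv(V.T @ V + eye)
        V = M.T @ U @ np.linalg.inv(U.T @ U + eye)
    return U, V

# Toy item-opinion count matrix (rows: items, columns: opinions).
M = np.array([[3.0, 1.0, 0.0],
              [2.0, 0.0, 1.0],
              [0.0, 4.0, 2.0]])
U, V = als_factorize(M)
print(np.round(U @ V.T, 2))  # approximate reconstruction of M
```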
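For the pre-trained LM baseline, the responses explain that an opinion is scored by the joint probability P(MOD-ASP) = P(MOD) * P(ASP|MOD), with the ASP token masked when scoring the MOD term. A rough sketch of that two-step masked-LM scoring with the Hugging Face transformers library is below; bert-large-uncased matches the 24-layer, 1024-hidden, 16-head model described in the response, but the prompt template is an invented stand-in for whatever input format the authors actually used.

```python
import torch
from transformers import BertForMaskedLM, BertTokenizer

tok = BertTokenizer.from_pretrained("bert-large-uncased")
model = BertForMaskedLM.from_pretrained("bert-large-uncased").eval()

def mask_prob(text, token):
    """Probability of `token` at the first [MASK] in `text`
    (assumes MOD/ASP are single tokens in BERT's vocabulary)."""
    inputs = tok(text, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    mask_pos = (inputs["input_ids"][0] == tok.mask_token_id).nonzero()[0].item()
    return logits[0, mask_pos].softmax(-1)[tok.convert_tokens_to_ids(token)].item()

# Invented prompt around a (modifier, aspect) opinion pair.
mod, asp = "noisy", "room"
p_mod = mask_prob(f"the hotel has a {tok.mask_token} {tok.mask_token} .", mod)  # ASP masked
p_asp_given_mod = mask_prob(f"the hotel has a {mod} {tok.mask_token} .", asp)  # MOD revealed
print(p_mod * p_asp_given_mod)  # joint score P(MOD) * P(ASP|MOD)
```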
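Finally, the reported inter-annotator agreement (a Fleiss Kappa of 0.76 over four categories) can be computed with standard tooling once the judgments are tabulated as per-question category counts. A sketch with invented counts, assuming a constant ten annotators per question:

```python
import numpy as np
from statsmodels.stats.inter_rater import fleiss_kappa

# Invented counts: rows are shared questions; columns are the four
# categories from the response ("O1 implies O2", "O2 implies O1",
# "equivalent", "unrelated"); each cell counts annotators choosing
# that category, with 10 annotators per question.
table = np.array([
    [8, 1, 1, 0],
    [0, 9, 0, 1],
    [1, 0, 8, 1],
    [0, 1, 1, 8],
])
print(fleiss_kappa(table))  # chance-corrected agreement across annotators
```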