{"forum": "HygfXWqTpm", "submission_url": "https://openreview.net/forum?id=HygfXWqTpm", "submission_content": {"title": "SHINRA: Structuring Wikipedia by Collaborative Contribution", "authors": ["Satoshi Sekine", "Akio Kobayashi", "Kouta Nakayama"], "authorids": ["[email protected]", "[email protected]", "[email protected]"], "keywords": ["Resource construction", "Structured Wikipedia"], "abstract": "We are reporting the SHINRA project, a project for structuring Wikipedia with collaborative construction scheme. The goal of the project is to create a huge and well-structured knowledge base to be used in NLP applications, such as QA, Dialogue systems and explainable NLP systems. It is created based on a scheme of \u201dResource by Collaborative Contribution (RbCC)\u201d. We conducted a shared task of structuring Wikipedia, and at the same, submitted results are used to construct a knowledge base.\nThere are machine readable knowledge bases such as CYC, DBpedia, YAGO, Freebase Wikidata and so on, but each of them has problems to be solved. CYC has a coverage problem, and others have a coherence problem due to the fact that these are based on Wikipedia and/or created by many but inherently incoherent crowd workers. In order to solve the later problem, we started a project for structuring Wikipedia using automatic knowledge base construction shared-task.\nThe automatic knowledge base construction shared-tasks have been popular and well studied for decades. However, these tasks are designed only to compare the performances of different systems, and to find which system ranks the best on limited test data. The results of the participated systems are not shared and the systems may be abandoned once the task is over.\nWe believe this situation can be improved by the following changes:\n1. designing the shared-task to construct knowledge base rather than evaluating only limited test data\n2. making the outputs of all the systems open to public so that we can run ensemble learning to create the better results than the best systems\n3. repeating the task so that we can run the task with the larger and better training data from the output of the previous task (bootstrapping and active learning)\nWe conducted \u201cSHINRA2018\u201d with the above mentioned scheme and in this paper\nwe report the results and the future directions of the project. The task is to extract the values of the pre-defined attributes from Wikipedia pages. We have categorized most of the entities in Japanese Wikipedia (namely 730 thousand entities) into the 200 ENE categories. Based on this data, the shared-task is to extract the values of the attributes from Wikipedia pages. We gave out the 600 training data and the participants are required to submit the attribute-values for all remaining entities of the same category type. Then 100 data out of them for each category are used to evaluate the system output in the shared-task.\nWe conducted a preliminary ensemble learning on the outputs and found 15 F1 score improvement on a category and the average of 8 F1 score improvements on all 5 categories we tested over a strong baseline. 
Based on these promising results, we decided to conduct three tasks in 2019: a multilingual categorization task (ML), extraction for the same 5 categories in Japanese with larger training data (JP-5), and extraction for 34 new categories in Japanese (JP-34).\n", "archival status": "Archival", "subject areas": ["Natural Language Processing", "Information Extraction", "Information Integration", "Crowd-sourcing", "Other"], "pdf": "/pdf/89d196db9b1ad333807587fad867ef76a10673f2.pdf", "paperhash": "sekine|shinra_structuring_wikipedia_by_collaborative_contribution", "TL;DR": "We introduce a \"Resource by Collaborative Construction\" scheme to create KB, structured Wikipedia ", "_bibtex": "@inproceedings{\nsekine2019shinra,\ntitle={{\\{}SHINRA{\\}}: Structuring Wikipedia by Collaborative Contribution},\nauthor={Satoshi Sekine and Akio Kobayashi and Kouta Nakayama},\nbooktitle={Automated Knowledge Base Construction (AKBC)},\nyear={2019},\nurl={https://openreview.net/forum?id=HygfXWqTpm}\n}"}, "submission_cdate": 1542459674310, "submission_tcdate": 1542459674310, "submission_tmdate": 1580939652454, "submission_ddate": null, "review_id": ["S1xnGsw7fN", "r1gc2eYBWV", "BkgGPkXtMN"], "review_url": ["https://openreview.net/forum?id=HygfXWqTpm&noteId=S1xnGsw7fN", "https://openreview.net/forum?id=HygfXWqTpm&noteId=r1gc2eYBWV", "https://openreview.net/forum?id=HygfXWqTpm&noteId=BkgGPkXtMN"], "review_cdate": [1547037459795, 1546125490325, 1547411289844], "review_tcdate": [1547037459795, 1546125490325, 1547411289844], "review_tmdate": [1550269653828, 1550269653619, 1550269630186], "review_readers": [["everyone"], ["everyone"], ["everyone"]], "review_writers": [["AKBC.ws/2019/Conference"], ["AKBC.ws/2019/Conference"], ["AKBC.ws/2019/Conference"]], "review_reply_count": [{"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}], "review_replyto": ["HygfXWqTpm", "HygfXWqTpm", "HygfXWqTpm"], "review_content": [{"title": "A practical work but lacks methodological contribution", "review": "This paper introduces a project for structuring Wikipedia by aggregating the outputs of different systems through ensemble learning. It presents a case study of entity and attribute extraction from Japanese Wikipedia. \n\nMy major concern is the lack of methodological contribution. \n- Ensemble learning, which seems most like the methodological contribution, is applied in a straightforward way. The finding that ensemble learning gives better results than individual learners is trivial.\n- The authors state that a key feature of the project is using bootstrapping or active learning. This, however, is neither explained in the paper nor supported by experimental results.\n\nClarification or details are needed for the steps introduced in Sections 3-6:\n- In \"Extended Named Entity\", why would the top-down ontology ENE be better than inferred or crowd-created ones? I think each of them has pros and cons.\n- In \"Categorization of Wikipedia Entities\", is the training data created by multiple annotators? What is the agreement between the multiple annotators for the test (and the training) data? How much of the machine learning model's error is caused by incorrect human annotations?\n- In \"Share-Task Definition\", \"We give out 600 training data for each category.\": does this mean 600 entities?\n- In \"Building the Data\", what is the performance of experts and crowds in the different stages?\n\nWriting should be improved. 
Some examples:\n- What is meant by \"15 F1 score improvement on a category\"?\n- A lot of text in the abstract is repeated in the introduction.\n- \"For example, \u201dShinjuku Station\u201d is a kind of railway station is a type of ...\": not a sentence.\n- \"4 show the most frequent categories\": should be Table 1.\n- Page 8, \"n \u00bf t\": corrupted symbol.\n\nAs a last comment, I wonder how (much) this ensemble learning method can be better than crowd-based KBC methods, as motivated in the abstract and introduction. I would assume that machine learning has similar reliability issues to crowdsourcing even when ensemble learning is used. ", "rating": "4: Ok but not good enough - rejection", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}, {"title": "The paper reports a summary of the SHINRA project for structuring Wikipedia with a collaborative construction scheme.", "review": "The paper describes the SHINRA2018 task, which constructs a knowledge base rather than evaluating only on limited test data. The task is repeated with larger and better training data taken from the output of the previous task. \n\nThe paper is well written in general, though there is some redundancy: the abstract and the introduction share exactly the same content. The SHINRA shared task provides a good resource and platform for evaluating knowledge graph construction on Japanese Wikipedia. \n\nOne concern is that the paper does not really resolve the first statement in the abstract, since it still evaluates on limited test data with 100 samples. The main contribution of the paper seems to be ensemble learning, which has been shown to be effective in much previous work.", "rating": "6: Marginally above acceptance threshold", "confidence": "3: The reviewer is fairly confident that the evaluation is correct"}, {"title": "The paper describes an information extraction task, but too many questions are unanswered", "review": "The paper tackles an important problem, extraction of structured data from unstructured text, but lacks a comparison with existing approaches.\n\nSection 1\nWikipedia is not only a knowledge base of the names in the world. Maybe the authors wanted to say \"entities of the world\"?\nThe motivation of the paper is limited: what is the goal of the structured knowledge base? If the goal is better consistency, how do we improve consistency? There is nothing in the paper that indicates that the consistency is better than, say, manually created data with a voting mechanism. Why is this approach to KB structuring inherently coherent? \n\n\nSection 2\nIf the only problem of CYC is coverage, why did the authors not try to improve the coverage of CYC instead of inventing a new method?\n\nSection 3\nWhy is a top-down design of the ontology needed? If the authors have learned this, what is the supporting evidence for it?\n\nSection 4\nNo annotation reliability is reported (e.g. the inter-annotator agreement score). \n\nSection 5\nWhy was \"chemical compound\" selected, and not \"movie\" or \"building\" as more common sub-categories?\n600 data points is quite small compared to standard datasets. What was the cost of the annotations?\n\nSection 6\nWhy were the Workers (Lancers) not used? What was their accuracy/cost? Maybe the cost could compensate for the lower accuracy.\n\nSection 7\nWhy is it scientifically interesting to know that the authors are happy?\n\nSection 8\nWhy did the authors participate in the shared task? \nWhat are the references for stacking? 
\nAs far as I know, stacking performs poorly compared to proper inference techniques, such as CRF. Why is it different in this case? \n\nOverall, the English writing is very approximate. I am not a native speaker myself, but I would suggest that the authors send the paper to a native English speaker for correction.", "rating": "4: Ok but not good enough - rejection", "confidence": "3: The reviewer is fairly confident that the evaluation is correct"}], "comment_id": [], "comment_cdate": [], "comment_tcdate": [], "comment_tmdate": [], "comment_readers": [], "comment_writers": [], "comment_reply_content": [], "comment_content": [], "comment_replyto": [], "comment_url": [], "meta_review_cdate": 1549981993335, "meta_review_tcdate": 1549981993335, "meta_review_tmdate": 1551128369862, "meta_review_ddate": null, "meta_review_title": "Interesting topic but still not mature presentation", "meta_review_metareview": "As is clear from the reviewers' comments, and also from the rebuttal responses, there are still a significant number of points to improve in the paper. However, I believe it is going to be an interesting poster presentation.", "meta_review_readers": ["everyone"], "meta_review_writers": [], "meta_review_reply_count": {"replyCount": 0}, "meta_review_url": ["https://openreview.net/forum?id=HygfXWqTpm&noteId=rJgWVtUgSV"], "decision": "Accept (Poster)"}