{"forum": "SOEJGCE76x", "submission_url": "https://openreview.net/forum?id=SOEJGCE76x", "submission_content": {"keywords": [], "TL;DR": "A method to learn Discrete Knowledge Graph Embeddings", "authorids": ["AKBC.ws/2020/Conference/Paper76/Authors"], "title": "Knowledge Graph Embedding Compression", "authors": ["Anonymous"], "pdf": "/pdf/30e315bfdffb004fa34be3c71a36af66cf6b7501.pdf", "subject_areas": ["Databases", "Knowledge Representation, Semantic Web and Search", "Relational AI"], "abstract": "Knowledge graph (KG) representation learning techniques that learn continuous embeddings of entities and relations in the KG have become popular in many AI applications. With a large KG, the embeddings consume a large amount of storage and memory. This is problematic and prohibits the deployment of these techniques in many real world settings. Thus, we propose an approach that compresses the KG embedding layer by representing each entity in the KG as a vector of discrete codes and then composes the embeddings from these codes. The approach can be trained end-to-end with simple modifications to any existing KG embedding technique. We evaluate the approach on various standard KG embedding evaluations and show that it achieves 50-1000x compression of embeddings with a minor loss in performance. The compressed embeddings also retain the ability to perform various reasoning tasks such as KG inference.", "paperhash": "anonymous|knowledge_graph_embedding_compression", "archival_status": "Non-Archival"}, "submission_cdate": 1581705814563, "submission_tcdate": 1581705814563, "submission_tmdate": 1588627591434, "submission_ddate": null, "review_id": ["heC7drmmAIy", "Mf5UUzXK-xb", "sAg9K6aDdAc"], "review_url": ["https://openreview.net/forum?id=SOEJGCE76x¬eId=heC7drmmAIy", "https://openreview.net/forum?id=SOEJGCE76x¬eId=Mf5UUzXK-xb", "https://openreview.net/forum?id=SOEJGCE76x¬eId=sAg9K6aDdAc"], "review_cdate": [1585317612411, 1585421254821, 1585552594282], "review_tcdate": [1585317612411, 1585421254821, 1585552594282], "review_tmdate": [1585695504065, 1585695503785, 1585695503511], "review_readers": [["everyone"], ["everyone"], ["everyone"]], "review_writers": [["AKBC.ws/2020/Conference/Paper76/AnonReviewer2"], ["AKBC.ws/2020/Conference/Paper76/AnonReviewer3"], ["AKBC.ws/2020/Conference/Paper76/AnonReviewer1"]], "review_reply_count": [{"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}], "review_replyto": ["SOEJGCE76x", "SOEJGCE76x", "SOEJGCE76x"], "review_content": [{"title": "Well-written paper with some oversights", "review": "This paper proposes to replace a typical knowledge graph embedding approach that produces a (typically large) continuous representation with a (small) discrete representation (a KD code) and a variety of encoder/reconstruction functions. It explores several combinations of encoding and reconstruction and shows benefit on compression metrics while maintaining performance on other tasks. \n\nOverall, I would say that this is a well-written paper with some oversights. Addressing them would strengthen its case for acceptance. The work itself ignores the transformer architecture, which seems an obvious candidate for the non-linear reconstruction element. \n\nQuality:\nThe paper itself seems well-written and addresses most obvious concerns with the work. 
It misses some related work, and crucially, ignores the transformer as a choice for reconstruction.\n\nMissing related work: Key-Value Memory Networks for Directly Reading Documents (Miller et al., 2016)\nPyTorch has a paper to cite from NeurIPS 2019: \u201cPyTorch: An Imperative Style, High-Performance Deep Learning Library\u201d\n\nClarity:\nThe paper itself is generally very clear.\n\nFurther elaboration about the particular choice of pseudo gradient for the tempering softmax is needed. Why not use the equivalent of the \u201cStraight-through Gumbel-softmax Estimator\u201d from (Jang et al., 2016) instead of this pseudo gradient trick?\n\nOriginality:\nWhile both partitioning embedding spaces and the particular learning methods are not novel, the authors do combine them in an interesting way.\n\nSignificance:\nIt is hard to project the impact of any particular work. This particular paper has potential for helping mobile device (and other resource-constrained) users.\n\nPros:\nRelatively thorough related work review\nLarge gains in compression ratios\n\nCons:\nSignificance gains on the primary tasks are marginal\nUses an LSTM but not a transformer for encoding reconstruction. Substitution of a transformer for an LSTM in PyTorch should be straightforward.\n", "rating": "7: Good paper, accept", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}, {"title": "Appropriate use of discrete representation learning for KG embedding compression ", "review": "Summary: This work combines knowledge graph (KG) embedding learning with discrete representation learning to compress KG embeddings. A discretization function converts continuous embeddings into a discrete KD code and a reverse-discretization function reconstructs the continuous embeddings. Two discretization training approaches (Vector Quantization [VQ] and Tempering Softmax [TS]) and reverse-discretization functions (Codebook Lookup [CL] and Nonlinear reconstruction [NL]) are proposed. The four resulting combinations are empirically evaluated for link prediction and logical inference tasks, with TS-NL performing best. TS-NL outperforms its continuous counterpart on the logical inference task. Furthermore, the authors run ablations on the size of the KD code and propose training guidance from continuous embeddings for faster convergence.\n\nPros:\n- Discrete KD code representation confers desirable properties of interpretability - semantically similar entities are assigned nearby codes\n- The discretization learning method proposed here can be combined with different KG embedding learning techniques\n- Results suggest minimal performance decline across multiple KG applications and up to 1000x compression.\n\nQuestions:\n- Is the LSTM used in the NL technique a bidirectional LSTM? If no, have you experimented with BiLSTMs since there seems to be nothing inherently unidirectional about the discretization function? If yes, is that the reason for two sets of parameter matrices for I/O/F gates in your LSTM model?\n- Are the continuous embedding dimension and the dimension of the embeddings obtained after reverse-discretization comparable?\n- How much additional inference cost does your method incur over the continuous embedding approach? 
\n\n", "rating": "8: Top 50% of accepted papers, clear accept", "confidence": "3: The reviewer is fairly confident that the evaluation is correct"}, {"title": "Effective techniques for compression knowledge graph embeddings without much loss in performance ", "review": "This paper proposes a compression method for knowledge graph embeddings. It learns discretization and reverse-discretization functions to map continuous vectors to discrete vectors and achieves huge storage reduction without much loss in performance. The learned representations are also shown to improve logical inference tasks for knowledge graph. \n\nIn general, the paper is well written. The description of the method is clear and the experiments are pretty thorough. The results are encouraging, as they show that with a very high compression rate, the model performs on par with the uncompressed model, sometimes even better. \n\nOne concern is the inference time and additional complexity with the introduction of LSTM-based reconstruction model, although the authors have shown that it only contributes a small runtime empirically. The LSTM module also introduces more hyper parameters. I wonder how they impact the compression performance.\n\nSome findings in the experiments are interesting. The discrete representations sometimes significantly outperform the continuous representations in the logical inference tasks. It would be nice to see some concrete examples and more analysis. Also, adding the regularization term helps a lot in terms of faster convergence. Is that true for all the KG embedding methods? How much performance gain it provides in general?", "rating": "8: Top 50% of accepted papers, clear accept", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}], "comment_id": [], "comment_cdate": [], "comment_tcdate": [], "comment_tmdate": [], "comment_readers": [], "comment_writers": [], "comment_reply_content": [], "comment_content": [], "comment_replyto": [], "comment_url": [], "meta_review_cdate": 1588281389380, "meta_review_tcdate": 1588281389380, "meta_review_tmdate": 1588341532900, "meta_review_ddate ": null, "meta_review_title": "Paper Decision", "meta_review_metareview": "This paper studies the problem of compressing KG embeddings, and suggests learning to discretize the embeddings and also to undo this discretization. The paper shows this helps the memory requirements without significant loss in quality.\n\nAll reviewers noted that the paper is well written and presents an interesting solution to the problem of large models. The authors' responses most of the questions raised in the reviews.\n", "meta_review_readers": ["everyone"], "meta_review_writers": ["AKBC.ws/2020/Conference/Program_Chairs"], "meta_review_reply_count": {"replyCount": 0}, "meta_review_url": ["https://openreview.net/forum?id=SOEJGCE76x¬eId=lIANK-bTxYN"], "decision": "Accept"}