doc_id (string, length 9) | text (sequence) | labels (sequence)
---|---|---|
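Each row below pairs a review's sentence sequence (text) with a parallel label sequence (labels); the label inventory observed in this dump is fact, evaluation, request, reference, quote, and non-arg. As a minimal sketch of how such records could be loaded and sanity-checked, assuming the rows were exported to a JSON Lines file named reviews.jsonl with fields doc_id, text, and labels (the filename and export format are assumptions, not part of this dump):

```python
import json
from collections import Counter

# Label inventory observed in this dump (assumed to be exhaustive).
LABELS = {"fact", "evaluation", "request", "reference", "quote", "non-arg"}

def load_records(path):
    """Yield one record per JSONL line, checking sentence/label alignment."""
    with open(path, encoding="utf-8") as f:
        for line in f:
            rec = json.loads(line)
            # text and labels are parallel sequences: one label per sentence.
            assert len(rec["text"]) == len(rec["labels"]), rec["doc_id"]
            assert set(rec["labels"]) <= LABELS, rec["doc_id"]
            yield rec

if __name__ == "__main__":
    counts = Counter()
    for rec in load_records("reviews.jsonl"):
        counts.update(rec["labels"])
    print(counts.most_common())
```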
rySfFbFgz | [
"In this paper, the authors propose a novel tracking loss to convert the RPN to a tracker. ",
"The internal structure of top layer features of RPN is exploited to treat feature points discriminatively. ",
"In addition, the proposed compression network speeds up the tracking algorithm. ",
"The experimental results on the VOT2016 dataset demonstrate its efficiency in tracking. ",
"This work is the combination of Faster R-CNN (Ren et al. PAMI 2015) and tracking-by-detection framework. ",
"The main contributions proposed in this paper are new tracking loss, network compression and results. ",
"There are numerous concerns with this work:",
"1.\tThe new tracking loss shown in equation 2 is similar with the original Faster R-CNN loss shown in equation 1. ",
"The only difference is to replace the regression loss with a predefined mask selection loss, ",
"which is of little sense that the feature processing can be further fulfilled through one-layer CNN. ",
"The empirical operation shown in figure 2 seems arbitrary and lack of theoretical explanation. ",
"There is no insight of why doing so. ",
"Simply showing the numbers in table 1 does not imply the necessity, ",
"which ought to be put in the experiment sections. ",
"2.\tThe network compression is engineering and lack insight as well. ",
"To remove part of the CNN and retrain is a common strategy in the CNN compression methods [a] [b]. ",
"There is a lack of discussion with the relationship with prior arts.",
"3.\tThe organization is not clear. ",
"Section 3.4 should be set in the experiments ",
"and Section 3.5 should be set at the beginning of the algorithm. ",
"The description of the network compression is not clear enough, especially the training details. ",
"Meanwhile, the presentation is hard to follow. ",
"There is no clear expression of how the tracker performs in practice.",
"4.\tIn addition, VOT 2016, the method should evaluate on the OTB dataset with the following trackers [c] [d].",
"5.\tThe evaluation is not fair. ",
"In Sec 6, the authors indicate that MDNet runs at 1FPS while the proposed tracker runs at 1.6FPS. ",
"However, MDNet is based on Matlab ",
"and the proposed tracker is based on C++ (i.e., Caffe).",
"Reference:[a] On Compressing Deep Models by Low Rank and Sparse Decomposition. Yu et al. CVPR 2017.",
"[b] Designing Energy-Efficient Convolutional Neural Network Using Energy-Aware Pruning. Yang et al. CVPR 2017.",
"[c] ECO: Efficient Convolution Operators for Tracking. Danelljan et al. CVPR 2017.",
"[d] Multi-Task Correlation Particle Filter For Robust Object Tracking. Zhang et al. CVPR 2017."
] | [
"fact",
"fact",
"fact",
"fact",
"fact",
"fact",
"evaluation",
"evaluation",
"fact",
"evaluation",
"evaluation",
"fact",
"fact",
"request",
"evaluation",
"evaluation",
"fact",
"evaluation",
"request",
"request",
"evaluation",
"evaluation",
"evaluation",
"request",
"evaluation",
"fact",
"fact",
"fact",
"reference",
"reference",
"reference",
"reference"
] |
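The record above is a complete row: 32 sentences paired one-to-one with 32 labels. A small sketch of how the two parallel sequences could be combined for inspection, reusing the (assumed) record layout from the loader above:

```python
from collections import Counter

def annotated_sentences(record):
    """Pair each sentence with its argument-mining label."""
    return list(zip(record["text"], record["labels"]))

def label_profile(record):
    """Fraction of sentences per label for one review."""
    counts = Counter(record["labels"])
    total = len(record["labels"])
    return {label: n / total for label, n in counts.items()}
```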
S15xOyjgf | [
"This paper proposes an evolutionary algorithm for solving the variational E step in expectation-maximization algorithm for probabilistic models with binary latent variables. ",
"This is done by (i) considering the bit-vectors of the latent states as genomes of individuals, and by (ii) defining the fitness of the individuals as the log joint distribution of the parameters and the latent space.",
"Pros:The paper is well written and the methodology presented is largely clear.",
"Cons:While the reviewer is essentially fine with the idea of the method, ",
"the reviewer is much less convinced of the empirical study. ",
"There is no comparison with other methods such as Monte carlo sampling.",
"It is not clear how computationally Evolutionary EM performs comparing to Variational EM algorithm ",
"and there is neither experimental results nor analysis for the computational complexity of the proposed model.",
"The datasets used in the experiments are quite old. ",
"The reviewer is concerned that these datasets may not be representative of real problems.",
"The applicability of the method is quite limited. ",
"The proposed model is only applicable for the probabilistic models with binary latent variables, ",
"hence it cannot be applied to more realistic complex model with real-valued latent variables."
] | [
"fact",
"fact",
"evaluation",
"evaluation",
"evaluation",
"fact",
"evaluation",
"fact",
"evaluation",
"evaluation",
"evaluation",
"fact",
"fact"
] |
H13MWgq4M | [
"This paper identifies and proposes a fix for a shortcoming of the Deep Information Bottleneck approach, namely that the induced representation is not invariant to monotonic transform of the marginal distributions (as opposed to the mutual information on which it is based). ",
"The authors address this shortcoming by applying the DIB to a transformation of the data, obtained by a copula transform. ",
"This explicit approach is shown on synthetic experiments to preserve more information about the target, yield better reconstruction and converge faster than the baseline. ",
"The authors further develop a sparse extension to this Deep Copula Information Bottleneck (DCIB), which yields improved representations (in terms of disentangling and sparsity) on a UCI dataset.",
"(significance) This is a promising idea. ",
"This paper builds on the information theoretic perspective of representation learning, ",
"and makes progress towards characterizing what makes for a good representation. ",
"Invariance to transforms of the marginal distributions is clearly a useful property, ",
"and the proposed method seems effective in this regard.",
"Unfortunately, I do not believe the paper is ready for publication as it stands, ",
"as it suffers from lack of clarity and the experimentation is limited in scope.",
"(clarity) While Section 3.3 clearly defines the explicit form of the algorithm ",
"(where data and labels are essentially pre-processed via a copula transform), ",
"details regarding the “implicit form” are very scarce. ",
"From Section 3.4, it seems as though the authors are optimizing the form of the gaussian information bottleneck I(x,t), in the hopes of recovering an encoder $f_\\beta(x)$ which gaussianizes the input (thus emulating the explicit transform) ? ",
"Could the authors clarify whether this interpretation is correct, or alternatively provide additional clarifying details ? ",
"There are also many missing details in the experimental section: ",
"how were the number of “active” components selected ? ",
"Which versions of the algorithm (explicit/implicit) were used for which experiments ? ",
"I believe explicit was used for Section 4.1, and implicit for 4.2 ",
"but again this needs to be spelled out more clearly. ",
"I would also like to see a discussion (and perhaps experimental comparison) to standard preprocessing techniques, such as PCA-whitening.",
"(quality) The experiments are interesting and seem well executed. ",
"Unfortunately, I do not think their scope (single synthetic, plus a single UCI dataset) is sufficient. ",
"While the gap in performance is significant on the synthetic task, ",
"this gap appears to shrink significantly when moving to the UCI dataset. ",
"How does this method perform for more realistic data, even e.g. MNIST ? ",
"I think it is crucial to highlight that the deficiencies of DIB matter in practice, and are not simply a theoretical consideration. ",
"Similarly, the representation analyzed in Figure 7 is promising, ",
"but again the authors could have targeted other common datasets for disentangling, e.g. the simple sprites dataset used in the beta-VAE paper. ",
"I would have also liked to see a more direct and systemic validation of the claims made in the paper. ",
"For example, the shortcomings of DIB identified in Section 3.1, 3.2 could have been verified more directly by plotting I(y,t) for various monotonic transformations of x. ",
"A direct comparison of the explicit and implicit forms of the algorithms would also also make for a stronger paper in my opinion.",
"Pros:* Theoretically well motivated",
"* Promising results on synthetic task",
"* Potential for impact",
"Cons:* Paper suffers from lack of clarity (method and experimental section)",
"* Lack of ablative / introspective experiments",
"* Weak empirical results (small or toy datasets only)."
] | [
"fact",
"fact",
"fact",
"fact",
"evaluation",
"fact",
"fact",
"evaluation",
"evaluation",
"evaluation",
"evaluation",
"evaluation",
"fact",
"evaluation",
"fact",
"request",
"evaluation",
"request",
"request",
"fact",
"request",
"request",
"evaluation",
"evaluation",
"evaluation",
"fact",
"request",
"request",
"evaluation",
"request",
"request",
"request",
"request",
"evaluation",
"evaluation",
"evaluation",
"evaluation",
"fact",
"evaluation"
] |
Hk_LRZ5gG | [
"This paper proposes several client-server neural network gradient update strategies aimed at reducing uplink usage while maintaining prediction performance.",
"The main approaches fall into two categories: structured, where low-rank/sparse updates are learned,",
"and sketched, where full updates are either sub-sampled or compressed before being sent to the central server.",
"Experiments are based on the federated averaging algorithm.",
"The work is valuable, but has room for improvement.",
"The paper is mainly an empirical comparison of several approaches, rather than from theoretically motivated algorithms.",
"This is not a criticism,",
"however, it is difficult to see the reason for including the structured low-rank experiments in the paper",
"(itAs a reader, I found it difficult to understand the actual procedures used.",
"For example, what is the difference between the random mask update and the subsampling update",
"(why are there no random mask experiments after figure 1, even though they performed very well)?",
"How is the structured update \"learned\"?",
"It would be very helpful to include algorithms.",
"It seems like a good strategy is to subsample, perform Hadamard rotation, then quantise.",
"For quantization, it appears that the HD rotation is essential for CIFAR, but less important for the reddit data.",
"It would be interesting to understand when HD works and why,",
"and perhaps make the paper more focused on this winning strategy, rather than including the low-rank algo.",
"If convenient, could the authors comment on a similarly motivated paper under review at iclr 2018:",
"VARIANCE-BASED GRADIENT COMPRESSION FOR EFFICIENT DISTRIBUTED DEEP LEARNING",
"pros:- good use of intuition to guide algorithm choices",
"- good compression with little loss of accuracy on best strategy",
"- good problem for FA algorithm / well motivated",
"cons:- some experiment choices do not appear well motivated / inclusion is not best choice",
"- explanations of algos / lack of 'algorithms' adds to confusion",
"a useful reference: Strom, Nikko. \"Scalable distributed dnn training using commodity gpu cloud computing.\" Sixteenth Annual Conference of the International Speech Communication Association. 2015."
] | [
"fact",
"fact",
"fact",
"fact",
"evaluation",
"evaluation",
"non-arg",
"evaluation",
"evaluation",
"non-arg",
"evaluation",
"non-arg",
"request",
"request",
"evaluation",
"request",
"request",
"request",
"reference",
"evaluation",
"evaluation",
"evaluation",
"evaluation",
"evaluation",
"reference"
] |
BJDxbMvez | [
"The authors propose a generative method that can produce images along a hierarchy of specificity, i.e. both when all relevant attributes are specified, and when some are left undefined, creating a more abstract generation task.",
"Pros:+ The results demonstrating the method's ability to generate results for (1) abstract and (2) novel/unseen attribute descriptions, are generally convincing.",
"Both quantitative and qualitative results are provided.",
"+ The paper is fairly clear.",
"Cons:- It is unclear how to judge diversity qualitatively, e.g. in Fig. 4(b).",
"- Fig. 5 could be more convincing;",
"\"bushy eyebrows\" is a difficult attribute to judge,",
"and in the abstract generation when that is the only attribute specified, it is not clear how good the results are."
] | [
"fact",
"evaluation",
"fact",
"evaluation",
"evaluation",
"evaluation",
"evaluation",
"evaluation"
] |
rycISJNgz | [
"Quality The method description, particularly about reference ambiguity, I found difficult to follow.",
"The experiments and analysis look solid,",
"although it would be nice to see experiments on more challenging natural image datasets.",
"Clarity “In general this is not possible… “ -",
"you are saying it is not possible to learn an encoder that recovers disentangled factors of variation?",
"But that seems to be one of the main goals of the paper.",
"It is not clear at all what is meant here or what the key problem is,",
"which detracts from the paper’s motivation.",
"What is the purpose of R_v and R_c in eq 2?",
"Why can these not be collapsed into the encoders N_v and N_c?",
"What does “different common factor” mean?",
"What is f_c in proof of proposition 1?",
"Previously f (no subscript) was referred to as a rendering engine.",
"T(v,c) ~ p_v and c ~ p_c are said to be independent.",
"But T(v,c) is explicitly defined in terms of c (equation 6).",
"So which is correct?",
"Overall the argument seems plausible -",
"pairs of images in which a single factor of variation changes have a reference ambiguity -",
"but the details are unclear.",
"Originality The model is very similar to Mathieu et al, although using image pairs rather than category labels directly.",
"The idea of weakly-supervised disentangling has also been explored in many other papers,",
"e.g. “Weakly-supervised Disentangling with Recurrent Transformations for 3D View Synthesis”, Yang et al.",
"The description of reference ambiguity seems new and potentially valuable,",
"but I did not find it easy to follow.",
"Significance Disentangling factors of variation with weak supervision is an important problem,",
"and this paper makes a modest advance in terms of the model and potentially in terms of the theory.",
"The analysis in figure 3 I found particularly interesting - illustrating that the encoder embedding dimension can have a drastic effect on the shortcut problem.",
"Overall I think this can be a significant contribution if the exposition can be improved.",
"Pros- Proposed method allows disentangling two factors of variation given a training set of image pairs with one factor of variation matching and the other non-matching.",
"- A challenge inherent to weakly supervised disentangling called reference ambiguity is described.",
"Cons- Only two factors of variation are studied,",
"and the datasets are fairly simple.",
"- The method description and the description of reference ambiguity are unclear."
] | [
"evaluation",
"evaluation",
"evaluation",
"quote",
"fact",
"fact",
"evaluation",
"fact",
"request",
"request",
"request",
"request",
"fact",
"fact",
"fact",
"request",
"evaluation",
"fact",
"evaluation",
"evaluation",
"fact",
"reference",
"evaluation",
"evaluation",
"evaluation",
"evaluation",
"evaluation",
"request",
"fact",
"fact",
"fact",
"evaluation",
"evaluation"
] |
SJdWxzoxz | [
"Summary:The paper presents a novel method for answering “How many …?” questions in the VQA datasets. ",
"Unlike previously proposed approaches, the proposed method uses an iterative sequential decision process for counting the relevant entity. ",
"The proposed model makes discrete choices about what to count at each time step. ",
"Another qualitative difference compared to existing approaches is that the proposed method returns bounding boxes for the counted object. ",
"The training and evaluation of the proposed model and baselines is done on a subset of the existing VQA dataset that consists of “How many …?” questions. ",
"The experimental results show that the proposed model outperforms the baselines discussed in the paper.",
"Strengths:1.\tThe idea of sequential counting is novel and interesting.",
"2.\tThe analysis of model performance by grouping the questions as per frequency with which the counting object appeared in the training data is insightful. ",
"Weaknesses:1.\tThe proposed dataset consists of 17,714 QA pairs in the dev set, whereas only 5,000 QA pairs in the test set. ",
"Such a 3.5:1 split of dev and test seems unconventional. ",
"Also, the size of the test set seems pretty small given the diversity of the questions in the VQA dataset.",
"2.\tThe paper lacks quantitative comparison with existing models for counting such as with Chattopadhyay et al. ",
"This would require the authors to report the accuracies of existing models by training and evaluating on the same subset as that used for the proposed model. ",
"Absence of such a comparison makes it difficult to judge how well the proposed model is performing compared to existing models.",
"3.\tThe paper lacks analysis on how much of performance improvement is due to visual genome data augmentation and pre-training? ",
"When comparing with existing models (as suggested in above), this analysis should be done, so as to identify the improvements coming from the proposed model alone.",
"4.\tThe paper does not report the variation in model performance when changing the weights of the various terms involved in the loss function (equations 15 and 16).",
"5.\tRegarding Chattopadhyay et al. the paper says that “However, their analysis was limited to the specific subset of examples where their approach was applicable.”",
"It would be good it authors could elaborate on this a bit more.",
"6.\tThe relation prediction part of the vision module in the proposed model seems quite similar to the Relation Networks, ",
"but the paper does not mention Relation Networks. ",
"It would be good to cite the Relation Networks paper and state clearly if the motivation is drawn from Relation Networks.",
"7.\tIt is not clear what are the 6 common relationships that are being considered in equation 1. ",
"Could authors please specify these?",
"8.\tIn equation 1, if only 6 relationships are being considered, then why does f^R map to R^7 instead of R^6?",
"9.\tIn equations 4 and 5, it is not clarified what each symbol represents, making it difficult to understand.",
"10.\tWhat is R in equation 15? ",
"Is it reward?",
"Overall:The paper proposes a novel and interesting idea for solving counting questions in the Visual Question Answering tasks. ",
"However, the writing of the paper needs to be improved to make is easier to follow. ",
"The experimental set-up – the size of the test dataset seems too small. ",
"And lastly, the paper needs to add comparisons with existing models on the same datasets as used for the proposed model. ",
"So, the paper seems to be not ready for the publication yet."
] | [
"fact",
"fact",
"fact",
"fact",
"fact",
"fact",
"evaluation",
"evaluation",
"fact",
"evaluation",
"evaluation",
"fact",
"request",
"evaluation",
"fact",
"request",
"fact",
"fact",
"request",
"evaluation",
"fact",
"request",
"evaluation",
"request",
"evaluation",
"evaluation",
"request",
"non-arg",
"evaluation",
"request",
"evaluation",
"request",
"evaluation"
] |
SyKUVctlM | [
"This paper proposes a recurrent neural network for visual question answering. ",
"The recurrent neural network is equipped with a carefully designed recurrent unit called MAC (Memory, Attention and Control) cell, which encourages sequential reasoning by restraining interaction between inputs and its hidden states. ",
"The proposed model shows the state-of-the-art performance on CLEVR and CLEVR-Humans dataset, which are standard benchmarks for visual reasoning problem. ",
"Additional experiments with limited training data shows the data efficiency of the model, which supports its strong generalization ability.",
"The proposed model in this paper is designed with reasonable motivations and shows strong experimental results in terms of overall accuracy and the data efficiency. ",
"However, an issue in the writing, usage of external component and lack of experimental justification of the design choices hinder the clear understanding of the proposed model.",
"An issue in the writing Overall, the paper is well written and easy to understand, ",
"but Section 3.2.3 (The Write Unit) has contradictory statements about their implementation. ",
"Specifically, they proposed three different ways to update the memory (simple update, self attention and memory gate), ",
"but it is not clear which method is used in the end.",
"Usage of external component The proposed model uses pretrained word vectors called GloVE, which has boosted the performance on visual question answering. ",
"This experimental setting makes fair comparison with the previous works difficult ",
"as the pre-trained word vectors are not used for the previous works. ",
"To isolate the strength of the proposed reasoning module, I ask to provide experiments without pretrained word vectors.",
"Lack of experimental justification of the design choices The proposed recurrent unit contains various design choices such as separation of three different units (control unit, read unit and memory unit), attention based input processing and different memory updates stem from different motivations. ",
"However, these design choices are not justified well ",
"because there is neither ablation study nor visualization of internal states. ",
"Any analysis or empirical study on these design choices is necessary to understand the characteristics of the model. ",
"Here, I suggest to provide few visualizations of attention weights and ablation study that could support indispensability of the design choices."
] | [
"fact",
"fact",
"fact",
"fact",
"evaluation",
"evaluation",
"evaluation",
"fact",
"fact",
"evaluation",
"fact",
"evaluation",
"fact",
"request",
"fact",
"evaluation",
"fact",
"request",
"request"
] |
Bk8FeZjgf | [
"Instead of either optimization-based variational EM or an amortized inference scheme implemented via a neural network as in standard VAE models, this paper proposes a hybrid approach that essentially combines the two.",
"In particular, the VAE inference step, i.e., estimation of q(z|x), is conducted via application of a recent learning-to-learn paradigm",
"(Andrychowicz et al., 2016),",
"whereby direct gradient ascent on the ELBO criteria with respect to moments of q(z|x) is replaced with a neural network that iteratively outputs new parameter estimates using these gradients.",
"The resulting iterative inference framework is applied to a couple of small datasets and shown to produce both faster convergence and a better likelihood estimate.",
"Although probably difficult for someone to understand that is not already familiar with VAE models,",
"I felt that this paper was nonetheless clear and well-presented, with a fair amount of useful background information and context.",
"From a novelty standpoint though, the paper is not especially strong",
"given that it represents a fairly straightforward application of",
"(Andrychowicz et al., 2016).",
"Indeed the paper perhaps anticipates this perspective and preemptively offers that \"variational inference is a qualitatively different optimization problem\" than that considered in (Andrychowicz et al., 2016), and also that non-recurrent optimization models are being used for the inference task, unlike prior work.",
"But to me, these are rather minor differentiating factors,",
"since learning-to-learn is a quite general concept already,",
"and the exact model structure is not the key novel ingredient.",
"That being said, the present use for variational inference nonetheless seems like a nice application,",
"and the paper presents some useful insights such as Section 4.1 about approximating posterior gradients.",
"Beyond background and model development, the paper presents a few experiments comparing the proposed iterative inference scheme against both variational EM, and pure amortized inference as in the original, standard VAE.",
"While these results are enlightening,",
"most of the conclusions are not entirely unexpected.",
"For example, given that the model is directly trained with the iterative inference criteria in place,",
"the reconstructions from Fig. 4 seem like exactly what we would anticipate, with the last iteration producing the best result.",
"It would certainly seem strange if this were not the case.",
"And there is no demonstration of reconstruction quality relative to existing models,",
"which could be helpful for evaluating relative performance.",
"Likewise for Fig. 6,",
"where faster convergence over traditional first-order methods is demonstrated;",
"but again, these results are entirely expected",
"as this phenomena has already been well-documented in",
"(Andrychowicz et al., 2016).",
"In terms of Fig. 5(b) and Table 1, the proposed approach does produce significantly better values of the ELBO critera;",
"however, is this really an apples-to-apples comparison?",
"For example, does the standard VAE have the same number of parameters/degrees-of-freedom as the iterative inference model, or might eq. (4) involve fewer parameters than eq. (5) since there are fewer inputs?",
"Overall, I wonder whether iterative inference is better than standard inference with eq. (4), or whether the recurrent structure from eq. (5) just happens to implicitly create a better neural network architecture for the few examples under consideration.",
"In other words, if one plays around with the standard inference architecture a bit, perhaps similar results could be obtained.",
"Other minor comment:* In Fig. 5(a), it seems like the performance of the standard inference model is still improving",
"but the iterative inference model has mostly saturated.",
"* A downside of the iterative inference model not discussed in the paper is that it requires computing gradients of the objective even at test time,",
"whereas the standard VAE model would not."
] | [
"fact",
"fact",
"reference",
"fact",
"fact",
"evaluation",
"evaluation",
"evaluation",
"evaluation",
"reference",
"evaluation",
"evaluation",
"evaluation",
"evaluation",
"evaluation",
"evaluation",
"fact",
"evaluation",
"evaluation",
"fact",
"evaluation",
"evaluation",
"fact",
"evaluation",
"fact",
"fact",
"evaluation",
"fact",
"reference",
"fact",
"evaluation",
"evaluation",
"evaluation",
"evaluation",
"evaluation",
"fact",
"fact",
"fact"
] |
H1wVDrtgM | [
"This paper tried to analyze the subspaces of the adversarial examples neighborhood. ",
"More specifically, the authors used Local Intrinsic Dimensionality to analyze the intrinsic dimensional property of the subspaces. ",
"The characteristics and theoretical analysis of the proposed method are discussed and explained. ",
"This paper helps others to better understand the vulnerabilities of DNNs."
] | [
"fact",
"fact",
"fact",
"evaluation"
] |
BkMvqjYgG | [
"This paper focuses on the problem of \"machine teaching\", i.e., how to select a good strategy to select training data points to pass to a machine learning algorithm, for faster learning. ",
"The proposed approach leverages reinforcement learning by defining the reward as how fast the learner learns, and use policy gradient to update the teacher parameters. ",
"I find the definition of the \"state\" in this case very interesting. ",
"The experimental results seem to show that such a learned teacher strategy makes machine learning algorithms learn faster. ",
"Overall I think that this paper is decent. ",
"The angle the authors took is interesting (essentially replacing one level of the bi-level optimization problem in machine teaching works with a reinforcement learning setup). ",
"The problem formulation is mostly reasonable, ",
"and the evaluation seems quite convincing. ",
"The paper is well-written: ",
"I enjoyed the mathematical formulation (Section 3). ",
"The authors did a good job of using different experiments (filtration number analysis, and teaching both the same architecture and a different architecture) to intuitively explain what their method actually does. ",
"At the same time, though, I see several important issues that need to be addressed if this paper is to be accepted. ",
"Details below. ",
"1. As much as I enjoyed reading Section 3, it is very redundant. ",
"In some cases it is good to outline a powerful and generic framework (like the authors did here with defining \"teaching\" in a very broad sense, including selecting good loss functions and hypothesis spaces) and then explain that the current work focuses on one aspect (selecting training data points). ",
"However, I do not see it being the case here. ",
"In my opinion, selecting good loss functions and hypothesis spaces are much harder problems than data teaching - except maybe when one use a pre-defined set of possible loss functions and select from it. ",
"But that is not very interesting ",
"(if you can propose new loss functions, that would be way cooler). ",
"I also do not see how to define an intuitive set of \"states\" in that case. ",
"Therefore, I think this section should be shortened. ",
"I also think that the authors should not discuss the general framework and rather focus on \"data teaching\", ",
"which is the only focus of the current paper. ",
"The abstract and introduction should also be modified accordingly to more honestly reflect the current contributions. ",
"2. The authors should do a better job at explaining the details of the state definition, especially the student model features and the combination of data and current learner model. ",
"3. There is only one definition of the reward - related to batch number when the accuracy first exceeds a threshold. ",
"Is accuracy stable, can it drop back down below the threshold in the next epoch? ",
"The accuracy on a held-out test set is not guaranteed to be monotonically increasing, right? ",
"Is this a problem in practice (it seems to happen on your curves)? ",
"What about other potential reward definitions? ",
"And what would they potentially lead to? ",
"4. Experimental results are averaged over 5 repeated runs ",
"- a bit too small in my opinion. ",
"5. Can the authors show convergence of the teacher parameter \\theta? ",
"I think it is important to see how fast the teacher model converges, too. ",
"6. In some of your experiments, every training method converges to the same accuracy after enough training (Fig.2b), while in others, not quite (Fig. 2a and 2c). ",
"Why is this the case? ",
"Does it mean that you have not run enough iterations for the baseline methods? ",
"My intuition is that if the learner algorithm is convex, then ultimately they will all get to the same accuracy level, so the task is just to get there quicker. ",
"I understand that since the learner algorithm is an NN, ",
"this is not the case ",
"- but more explanation is necessary here ",
"- does your method also reduces the empirical possibility to get stuck in local minima? ",
"7. More explanation is needed towards Fig.4c. ",
"In this case, using a teacher model trained on a harder task (CIFAR10) leads to much improved student training on a simpler task (MNIST). ",
"Why?",
"8. Although in terms of \"effective training data points\" the proposed method outperforms the other methods, ",
"in terms of time (Fig.5) the difference between it and say, NoTeach, is not that significant (especially at very high desired accuracy). ",
"More explanation needed here."
] | [
"fact",
"fact",
"evaluation",
"fact",
"evaluation",
"evaluation",
"evaluation",
"evaluation",
"evaluation",
"evaluation",
"evaluation",
"evaluation",
"non-arg",
"evaluation",
"evaluation",
"fact",
"evaluation",
"evaluation",
"evaluation",
"evaluation",
"request",
"request",
"fact",
"request",
"request",
"fact",
"request",
"request",
"request",
"request",
"request",
"fact",
"evaluation",
"request",
"evaluation",
"fact",
"request",
"request",
"evaluation",
"fact",
"fact",
"request",
"request",
"request",
"fact",
"request",
"fact",
"evaluation",
"request"
] |
S1KIF7olf | [
"This paper presents an empirical study of whether data augmentation can be a substitute for explicit regularization of weight decay and dropout.",
"It is a well written and well organized paper.",
"However, overall I do not find the authors’ premises and conclusions to be well supported by the results and",
"would suggest further investigations.",
"In particular: a) Data augmentation is a very domain specific process and limits of augmentation are often not clear.",
"For example, in financial data or medical imaging data it is often not clear how data augmentation should be carried out and how much is too much.",
"On the other hand model regularization is domain agnostic",
"(has to be tuned for each task, but the methodology is consistent and well known).",
"Thus advocating that data augmentation can universally replace explicit regularization does not seem correct.",
"b) I find the results to be somewhat inconsistent.",
"For example, on CIFAR-10, for 100% data regularization+augmentation is better than augmentation alone for both models,",
"whereas for 80% data augmentation alone seems to be better.",
"Similarly on CIFAR-100 the WRN model shows mixed trends,",
"and this model is significantly better than the All-CNN model in performance.",
"These results also seem inconsistent with authors statement",
"“…and conclude that data augmentation alone - without any other explicit regularization techniques - can achieve the same performance to higher as regularized models…”"
] | [
"fact",
"evaluation",
"evaluation",
"request",
"evaluation",
"evaluation",
"fact",
"fact",
"fact",
"evaluation",
"evaluation",
"evaluation",
"fact",
"fact",
"evaluation",
"quote"
] |
rJAxUSLSM | [
"The paper consider a method for \"weight normalization\" of layers of a neural network. ",
"The weight matrix is maintained normalized, which helps accuracy. ",
"However, the simplest way to normalize on a fully connected layer is quadratic (adding squares of weights and taking square root).",
"The paper proposes \"FastNorm\", which is a way to implicitly maintain the normalized weight matrix using much less computation. ",
"Essentially, a normalization vector is maintained an updated separately.",
"Pros: Natural method to do weight normalization efficeintly",
"Cons: A very natural and simple solution that is fairly obvious.",
"Limited experiments"
] | [
"fact",
"fact",
"fact",
"fact",
"fact",
"evaluation",
"evaluation",
"evaluation"
] |
ry2OdYCeM | [
"Paper presents an interesting attention mechanism for fine-grained image classification.",
"Introduction states that the method is simple and easy to understand.",
"However, the presentation of the method is bit harder to follow.",
"It is not clear to me if the attention modules are applied over all pooling layers.",
"How they are combined?",
"Why use cross -correlation as the regulariser?",
"Why not much stronger constraint such as orthogonality over elements of M in equation 1?",
"What is the impact of this regularisation?",
"Why use soft-max in equation 1?",
"One may use a Sigmoid as well?",
"Is it better to use soft-max?",
"Equation 9 is not entirely clear to me.",
"Undefined notations.",
"In Table 2, why stop from AD= 2 and AW=2?",
"What is the performance of AD=1, AW=1 with G?",
"Why not perform this experiment over all 5 datasets?",
"Is this performances, dataset specific?",
"The method is compared against 5 datasets.",
"Obtained results are quite good."
] | [
"evaluation",
"fact",
"evaluation",
"evaluation",
"request",
"request",
"request",
"request",
"request",
"non-arg",
"non-arg",
"evaluation",
"fact",
"request",
"request",
"request",
"request",
"fact",
"evaluation"
] |
HJ2pirpxG | [
"This paper considers the problem of improving sequence generation by learning better metrics. ",
"Specifically, it focuses on addressing the exposure bias problem, where traditional methods such as SeqGAN uses GAN framework and reinforcement learning. ",
"Different from these work, this paper does not use GAN framework. ",
"Instead, it proposed an expert-based reward function training, which trains the reward function (the discriminator) from data that are generated by randomly modifying parts of the expert trajectories. ",
"Furthermore, it also introduces partial reward function that measures the quality of the subsequences of different lengths in the generated data. ",
"This is similar to the idea of hierarchical RL, which divide the problem into potential subtasks, which could alleviate the difficulty of reinforcement learning from sparse rewards. ",
"The idea of the paper is novel. ",
"However, there are a few points to be clarified.",
"In Section 3.2 and in (4) and (5), the authors explains how the action value Q_{D_i} is modeled and estimated for the partial reward function D_i of length L_{D_i}. ",
"But the authors do not explain how the rewards (or action value functions) of different lengths are aggregated together to update the model using policy gradient. ",
"Is it a simple sum of all of them?",
"It is not clear why the future subsequences that do not contain y_{t+1} are ignored for estimating the action value function Q in (4) and (5). ",
"The authors stated that it is for reducing the computation complexity. ",
"But it is not clear why specifically dropping the sequences that do not contain y_{t+1}. ",
"Please clarify more on this point."
] | [
"fact",
"fact",
"fact",
"fact",
"fact",
"evaluation",
"evaluation",
"evaluation",
"fact",
"fact",
"non-arg",
"evaluation",
"fact",
"evaluation",
"request"
] |
HygXOMDxf | [
"The authors propose an approach to dynamically generating filters in a CNN based on the input image. ",
"The filters are generated as linear combinations of a basis set of filters, based on features extracted by an auto-encoder. ",
"The authors test the approach on recognition tasks on three datasets: MNIST, MTFL (facial landmarks) and CIFAR10, and show a small improvement over baselines without dynamic filters.",
"Pros: 1) I have not seen this exact approach proposed before.",
"2) There method is evaluated on three datasets and two tasks: classification and facial landmark detection.",
"Cons: 1) The authors are not the first to propose dynamically generating filters, ",
"and they clearly mention that the work of De Brabandere et al. is closely related. ",
"Yet, there is no comparison to other methods for dynamic weight generation. ",
"2) Related to that, there is no ablation study, ",
"so it is unclear if the authors’ contributions are useful. ",
"I appreciate the analysis in Tables 1 and 2, ",
"but this is not sufficient. ",
"Why the need for the autoencoder - why can’t the whole network be trained end-to-end on the goal task? ",
"Why generate filters as linear combination - is this just for computational reasons, or also accuracy? ",
"This should be analyzed empirically.",
"3) The experiments are somewhat substandard:",
"- On MNIST the authors use a tiny poorly-performance network, ",
"and it is no surprise that one can beat it with a bigger dynamic filter network.",
"- The MTFL experiments look most convincing ",
"(although this might be because I am not familiar with SoTA on the dataset), ",
"but still there is no control for the number of parameters, ",
"and the performance improvements are not huge",
"- On CIFAR10 - there is a marginal improvement in performance, ",
"which, as the authors admit, can also be reached by using a deeper model. ",
"The baseline models are far from SoTA ",
"- the authors should look at more modern architecture such as AllCNN (not particularly new or good, but very simple), ResNet, wide ResNet, DenseNet, etc.",
"As a comment, I don’t think classification is a good task for showcasing such an architecture ",
"- classification is already working extremely well. ",
"Many other tasks - for instance, detection, tracking, few-shot learning - seem much more promising.",
"To conclude, the authors propose a new approach to learning convolutional networks with dynamic input-conditioned filters. ",
"Unfortunately, the authors fail to demonstrate the value of the proposed method. ",
"I therefore recommend rejection."
] | [
"fact",
"fact",
"fact",
"evaluation",
"fact",
"fact",
"fact",
"fact",
"fact",
"evaluation",
"evaluation",
"evaluation",
"request",
"request",
"request",
"evaluation",
"evaluation",
"evaluation",
"evaluation",
"evaluation",
"fact",
"evaluation",
"evaluation",
"fact",
"evaluation",
"request",
"evaluation",
"evaluation",
"evaluation",
"fact",
"evaluation",
"evaluation"
] |
H15qgiFgf | [
"This work identifies a mistake in the existing proof of convergence of Adam, ",
"which is among the most popular optimization methods in deep learning. ",
"Moreover, it gives a simple 1-dimensional counterexample with linear losses on which Adam does not converge. ",
"The same issue also affects RMSprop, ",
"which may be viewed as a special case of Adam without momentum. ",
"The problem with Adam is that the \"learning rate\" matrices V_t^{1/2}/alpha_t are not monotonically decreasing. ",
"A new method, called AMSGrad is therefore proposed, which modifies Adam by forcing these matrices to be decreasing. ",
"It is then shown that AMSGrad does satisfy essentially the same convergence bound as the one previously claimed for Adam. ",
"Experiments and simulations are provided that support the theoretical analysis.",
"Apart from some issues with the technical presentation (see below), ",
"the paper is well-written.",
"Given the popularity of Adam, I consider this paper to make a very interesting observation. ",
"I further believe all issues with the technical presentation can be readily addressed.",
"Issues with Technical Presentation:- All theorems should explicitly state the conditions they require instead of referring to \"all the conditions in (Kingma & Ba, 2015)\".",
"- Theorem 2 is a repetition of Theorem 1 (except for additional conditions).",
"- The proof of Theorem 3 assumes there are no projections, ",
"so this should be stated as part of its conditions. ",
"(The claim in footnote 2 that they can be handled seems highly plausible, ",
"but you should be up front about the limitations of your results.)",
"- The regret bound Theorem 4 establishes convergence of the optimization method, ",
"so it plays the role of a sanity check. ",
"However, it is strictly worse than the regret bound O(sqrt{T}) for online gradient descent [Zinkevich,2003], ",
"so it cannot explain why the proposed AMSgrad method might be adaptive. ",
"(The method may indeed be adaptive in some sense; ",
"I am just saying the *bound* does not express that.",
"This is also not a criticism of the current paper; ",
"the same remark also applies to the previously claimed regret bound for Adam.)",
"- The discussion following Corollary 1 suggests that sum_i hat{v}_{T,i}^{1/2} might be much smaller than d G_infty. ",
"This is true, ",
"but we should always expect it to be at least a constant, ",
"because hat{v}_{t,i} is monotonically increasing by definition of the algorithm, ",
"so the bound does not get better than O(sqrt(T)).",
"It is also suggested that sum_i ||g_{1:T,i}|| = sqrt{sum_{t=1}^T g_{t,i}^2} might be much smaller than dG_infty, ",
"but this is very unlikely, ",
"because this term will typically grow like O(sqrt{T}), unless the data are extremely sparse, ",
"so we should at least expect some dependence on T.",
"- In the proof of Theorem 1, the initial point is taken to be x_1 = 1,",
"which is perfectly fine, ",
"but it is not \"without loss of generality\", as claimed. ",
"This should be stated in the statement of the Theorem.",
"- The proof of Theorem 6 in appendix B only covers epsilon=1. ",
"If it is \"easy to show\" that the same construction also works for other epsilon, as claimed, then please provide the proof for general epsilon.",
"Other remarks:- Theoretically, nonconvergence of Adam seems a severe problem. ",
"Can you speculate on why this issue has not prevented its widespread adoption?",
"Which factors might mitigate the issue in practice?",
"- Please define g_t \\circ g_t and g_{1:T,i}",
"- I would recommend sticking with standard linear algebra notation for the sqrt and the inverse of a matrix and simply using A^{-1} and A^{1/2} instead of 1/A and sqrt{A}.",
"- In theorems 1,2,3, I would recommend stating the dimension (d=1) of your counterexamples, ",
"which makes them very nice!",
"Minor issues:- Check accent on Nicol\\`o Cesa-Bianchi in bibliography.",
"- Near the end of the proof of Theorem 6: I believe you mean Adam suffers a \"regret\" instead of a \"loss\" of at least 2C-4.",
"Also 2C-4=2C-4 is trivial in the second but last display."
] | [
"fact",
"evaluation",
"fact",
"fact",
"evaluation",
"fact",
"fact",
"fact",
"fact",
"evaluation",
"evaluation",
"evaluation",
"evaluation",
"request",
"fact",
"fact",
"request",
"evaluation",
"request",
"fact",
"evaluation",
"fact",
"fact",
"evaluation",
"evaluation",
"non-arg",
"non-arg",
"evaluation",
"fact",
"fact",
"fact",
"fact",
"evaluation",
"evaluation",
"evaluation",
"evaluation",
"fact",
"evaluation",
"fact",
"request",
"fact",
"request",
"evaluation",
"request",
"request",
"request",
"request",
"request",
"evaluation",
"request",
"request",
"evaluation"
] |
ryoWUP5lz | [
"This work proposes an approach for transcription factor binding site prediction using a multi-label classification formulation. ",
"It is a very interesting problem ",
"and application and the approach is interesting. ",
"Novelty: The method is quite similar to matching networks (Vinyals, 2016) with a few changes in the matching approach. ",
"As such, in order to establish its broader applicability there should be additional evaluation on other benchmark datasets. ",
"The MNIST performance comparison is inadequate ",
"and there are other papers that do better on it. ",
"They should clearly list what the contributions are w.r.t to the work by Vinyals et al 2016.",
"They should also cite works that learn embeddings in a multi-label setting such as StarSpace.",
"Impact: In its current form the paper seems to be most relevant to the computational biology / TFBS community. ",
"However, there is no comparison to the exact networks used in the prior works DeepBind/DeepSea/DanQ/Basset/DeepLift or bidirectional LSTMs. ",
"Further there is no comparison to existing one-shot learning techniques either. ",
"This greatly limits the impact of the work.",
"For biological impact, a comparison to any of the motif learning approaches that are popular in the biology/comp-bio community will help (for instance, HOMER, FIMO).",
"Cons: The authors claim they can learn TF-TF interactions and it is one of the main biological contributions, ",
"but there is no evidence of why ",
"(beyond very preliminary evaluation using the Trrust database). ",
"Their examples are 200-bp long which does not mean that all TFs binding in that window are involved in cooperative binding. ",
"The prototype loss is too simplistic to capture co-binding tendencies ",
"and the combinationLSTM is not well motivated. ",
"One interesting source of information they could tap into for TF-TF interactions is CAP-SELEX (Jolma et al, Nature 2015).",
"One of the main drawbacks is the lack of interpretability of their model where approaches like DanQ/DeepLift etc benefit. ",
"The PWM-like filters in some of the prior works help understand what type of sequence properties contribute to binding events. ",
"Can their model lead to an understanding of this sort?",
"Evaluation: The empirical evaluation itself is not very strong ",
"as there are only modest improvements over simple baselines. ",
"Further there are no error-bars etc to indicate the variance in their performance numbers.",
"It will be useful to have a TF-level performance split-up to get an idea of which TFs benefit most.",
"Clarity: The paper can benefit from more clarity in the technical aspects. ",
"It is hard to follow for anyone not already familiar with matching networks. ",
"The objective function, parameters need to be clearly introduced in one place. ",
"For instance, what is y_i in their multi-label framework?",
"Various choices are not well motivated; for instance cosine similarity, the value of hyperparameter epsilon.",
"The prototype vectors are not motif-like at all -- ",
"can the authors motivate this aspect better?"
] | [
"fact",
"evaluation",
"evaluation",
"evaluation",
"request",
"evaluation",
"fact",
"request",
"request",
"evaluation",
"fact",
"fact",
"evaluation",
"request",
"fact",
"fact",
"evaluation",
"fact",
"evaluation",
"evaluation",
"request",
"evaluation",
"fact",
"non-arg",
"evaluation",
"evaluation",
"fact",
"request",
"request",
"evaluation",
"request",
"request",
"evaluation",
"evaluation",
"request"
] |
S1ck4rYxM | [
"[Overview]In this paper, the authors proposed a novel model called MemoryGAN, which integrates memory network with GAN.",
"As claimed by the authors, MemoryGAN is aimed at addressing two problems of GAN training:",
"1) difficult to model the structural discontinuity between disparate classes in the latent space;",
"2) catastrophic forgetting problem during the training of discriminator about the past synthesized samples by the generator.",
"It exploits the life-long memory network and adapts it to GAN.",
"It consists of two parts, discriminative memory network (DMN) and Memory Conditional Generative Network (MCGN).",
"DMN is used for discriminating input samples by integrating the memory learnt in the memory network, and MCGN is used for generating images based on random vector and the sampled memory from the memory network.",
"In the experiments, the authors evaluated memoryGAN on three datasets, CIFAR-10, affine-MNIST and Fashion-MNIST, and demonstrated the superiority to previous models.",
"Through ablation study, the authors further showed the effects of separate components in memoryGAN.",
"[Strengths] 1. This paper is well-written.",
"All modules in the proposed model and the experiments were explained clearly.",
"I enjoyed much to read the paper.",
"2. The paper presents a novel method called MemoryGAN for GAN training.",
"To address the two infamous problems mentioned in the paper, the authors proposed to integrate a memory network into GAN.",
"Through memory network, MemoryGAN can explicitly learn the data distribution of real images and fake images.",
"I think this is a very promising and meaningful extension to the original GAN.",
"3. With MemoryGAN, the authors achieved best Inception Score on CIFAR-10.",
"By ablation study, the authors demonstrated each part of the model helps to improve the final performance.",
"[Comments] My comments are mainly about the experiment part:",
"1. In Table 2, the authors show the Inception Score of images generated by DCGAN at the last row.",
"On CIFAR-10, it is ~5.35.",
"As the authors mentioned, removing EM, MCGCN and Memory will result in a conventional DCGAN.",
"However, as far as I know, DCGAN could achieve > 6.5 Inception Score in general.",
"I am wondering what makes such a big difference between the reported numbers in this paper and other papers?",
"2. In the experiments, the authors set N = 16,384, and M = 512, and z is with dimension 16.",
"I did not understand why the memory size is such large.",
"Take CIFAR-10 as the example, its training set contains 50k images.",
"Using such a large memory size, each memory slot will merely count for several samples.",
"Is a large memory size necessary to make MemoryGAN work?",
"If not, the authors should also show ablated study on the effect of different memory size;",
"If it is true, please explain why is that.",
"Also, the authors should mention the training time compared with DCGAN.",
"Updating memory with such a large size seems very time-consuming.",
"3. Still on the memory size in this model.",
"I am curious about the results if the size is decreased to the same or comparable number of image categories in the training set.",
"As the author claimed, if the memory network could learn to cluster training data into different category, we should be able to see some interesting results by sampling the keys and generate categoric images.",
"4. The paper should be compared with InfoGAN (Chen et al. 2016),",
"and the authors should explain the differences between two models in the related work.",
"Similar to MemoryGAN, InfoGAN also did not need any data annotations, but could learn the latent code flexibly.",
"[Summary]",
"This paper proposed a new model called MemoryGAN for image generation.",
"It combined memory network with GAN, and achieved state-of-art performance on CIFAR-10.",
"The arguments that MemoryGAN could solve the two infamous problem make sense.",
"As I mentioned above, I did not understand why the authors used such large memory size.",
"More explanations and experiments should be conducted to justify this setting.",
"Overall, I think MemoryGAN opened a new direction of GAN and worth to further explore."
] | [
"fact",
"fact",
"fact",
"fact",
"fact",
"fact",
"fact",
"fact",
"fact",
"evaluation",
"evaluation",
"evaluation",
"fact",
"fact",
"fact",
"evaluation",
"fact",
"fact",
"non-arg",
"fact",
"fact",
"fact",
"fact",
"non-arg",
"fact",
"evaluation",
"fact",
"fact",
"non-arg",
"request",
"request",
"request",
"fact",
"non-arg",
"non-arg",
"fact",
"request",
"request",
"fact",
"non-arg",
"fact",
"fact",
"evaluation",
"non-arg",
"request",
"evaluation"
] |
HJ1MEAYxG | [
"The authors are motivated by two problems: Inputting non-Euclidean data (such as graphs) into deep CNNs, and analyzing optimization properties of deep networks.",
"In particular, they look at the problem of maze testing, where, given a grid of black and white pixels, the goal is to answer whether there is a path from a designated starting point to an ending point.",
"They choose to analyze mazes because they have many nice statistical properties from percolation theory.",
"For one, the problem is solvable with breadth first search in O(L^2) time, for an L x L maze.",
"They show that a CNN can essentially encode a BFS,",
"so theoretically a CNN should be able to solve the problem.",
"Their architecture is a deep feedforward network where each layer takes as input two images: one corresponding to the original maze (a skip connection), and the output of the previous layer.",
"Layers alternate between convolutional and sigmoidal.",
"The authors discuss how this architecture can solve the problem exactly.",
"The pictorial explanation for how the CNN can mimic BFS is interesting",
"but I got a little lost in the 3 cases on page 4.",
"For example, what is r?",
"And what is the relation of the black/white and orange squares?",
"I thought this could use a little more clarity.",
"Though experiments, they show that there are two kinds of minima, depending on whether we allow negative initializations in the convolution kernels.",
"When positive initializations are enforced, the network can more or less mimic the BFS behavior, but never when initializations can be negative.",
"They offer a rigorous analysis into the behavior of optimization in each of these cases, concluding that there is an essential singularity in the cost function around the exact solution,",
"yet learning succumbs to poor optima due to poor initial predictions in training.",
"I thought this was an impressive paper that looked at theoretical properties of CNNs.",
"The problem was very well-motivated,",
"and the analysis was sharp and offered interesting insights into the problem of maze solving.",
"What I thought was especially interesting is how their analysis can be extended to other graph problems;",
"while their analysis was specific to the problem of maze solving, they offer an approach -- e.g. that of finding \"bugs\" when dealing with graph objects -- that can extend to other problems.",
"I would be excited to see similar analysis of other toy problems involving graphs.",
"One complaint I had was inconsistent clarity:",
"while a lot was well-motivated and straightforward to understand,",
"I got lost in some of the details (as an example, the figure on page 4 did not initially make much sense to me).",
"Also, in the experiments, the authors mention multiple attempt with the same settings --",
"are these experiments differentiated only by their initialization?",
"Finally, there were various typos throughout",
"(one example is \"neglect minimua\" on page 2 should be \"neglect minima\").",
"Pros: Rigorous analysis,",
"well motivated problem,",
"generalizable results to deep learning theory",
"Cons: Clarity"
] | [
"fact",
"fact",
"fact",
"fact",
"fact",
"fact",
"fact",
"fact",
"fact",
"evaluation",
"evaluation",
"request",
"request",
"request",
"fact",
"fact",
"fact",
"fact",
"evaluation",
"evaluation",
"evaluation",
"evaluation",
"fact",
"request",
"evaluation",
"evaluation",
"evaluation",
"fact",
"request",
"fact",
"request",
"evaluation",
"evaluation",
"fact",
"evaluation"
] |
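As background for the review above: the maze-testing task it describes is solvable by breadth-first search in O(L^2) time because each of the L^2 grid cells is enqueued at most once. A minimal sketch of that BFS, with the function name and the 1 = free / 0 = wall encoding as my own illustrative choices (not the paper's code):

```python
# Breadth-first search over an L x L grid maze: 1 = free cell, 0 = wall.
# Each cell enters the queue at most once, giving the O(L^2) bound the
# review mentions.
from collections import deque

def maze_reachable(maze, start, goal):
    """Return True if `goal` is reachable from `start` by 4-neighbor moves."""
    L = len(maze)
    seen = {start}
    queue = deque([start])
    while queue:
        r, c = queue.popleft()
        if (r, c) == goal:
            return True
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < L and 0 <= nc < L and maze[nr][nc] == 1 \
                    and (nr, nc) not in seen:
                seen.add((nr, nc))
                queue.append((nr, nc))
    return False

print(maze_reachable([[1, 0], [1, 1]], (0, 0), (1, 1)))  # True
```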
SJQVdQ5lG | [
"This paper describes an extension to the recently introduced Transformer networks which shows better convergence properties and also improves results on standard machine translation benchmarks. ",
"This is a great paper ",
"-- it introduces a relatively simple extension of Transformer networks which only adds very few parameters and speeds up convergence and achieves better results. ",
"It would have been good to also add a motivation for doing this ",
"(for example, this idea can be interpreted as having a variable number of attention heads which can be blended in and out with a single learned parameter, hence making it easier to use the parameters where they are needed). ",
"Also, it would be interesting to see how important the concatenation weight and the addition weight are relative to each other -- ",
"do you possibly get the same results even without the concatenation weight? ",
"A suggested improvement: Please check the references in the introduction and see if you can find earlier ones -- ",
"for example, language modeling with RNNs has been done for a very long time, not just since 2017 which are the ones you list; ",
"similar for speech recognition etc. (which probably has been done since 1993!)."
] | [
"fact",
"evaluation",
"fact",
"request",
"fact",
"request",
"request",
"request",
"fact",
"fact"
] |
BJQD_I_eM | [
"The paper proposes an analysis on different adaptive regularization techniques for deep transfer learning. ",
"Specifically it focuses on the use of an L2-SP condition that constraints the new parameters to be close to the ones previously learned when solving a source task. ",
"+ The paper is easy to read and well organized",
"+ The advantage of the proposed regularization against the more standard L2 regularization is clearly visible from the experiments",
"- The idea per se is not new: ",
"there is a list of shallow learning methods for transfer learning based on the same L2 regularization choice",
"[Cross-Domain Video Concept Detection using Adaptive SVMs, ACM Multimedia 2007]",
"[Learning categories from few examples with multi model knowledge transfer, PAMI 2014]",
"[From n to n+ 1: Multiclass transfer incremental learning, CVPR 2013]",
"I believe this literature should be discussed in the related work section",
"- It is true that the L2-SP-Fisher regularization was designed for life-long learning cases with a fixed task, ",
"however, this solution seems to work quite well in the proposed experimental settings. ",
"From my understanding L2-SP-Fisher can be considered the best competitor of L2-SP ",
"so I think the paper should dedicate more space to the analysis of their difference and similarities both from the theoretical and experimental point of view. ",
"For instance: -- adding the L2-SP-Fisher results in table 2",
"-- repeating the experiments of figure 2 and figure 3 with L2-SP-Fisher"
] | [
"fact",
"fact",
"evaluation",
"evaluation",
"evaluation",
"fact",
"reference",
"reference",
"reference",
"request",
"fact",
"evaluation",
"evaluation",
"request",
"request",
"request"
] |
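For concreteness, the L2-SP regularizer discussed in the review above penalizes the distance between the fine-tuned weights and the source-task weights w0 rather than the distance to zero. A minimal sketch, assuming PyTorch; the function name, `alpha`, and the dictionary-based filtering of shared parameters are illustrative choices, not the paper's exact configuration:

```python
import torch

def l2_sp_penalty(model, source_params, alpha=0.01):
    """Sum of alpha * ||w - w0||^2 over parameters shared with the source model."""
    penalty = torch.tensor(0.0)
    for name, p in model.named_parameters():
        if name in source_params:  # only layers transferred from the source task
            penalty = penalty + alpha * (p - source_params[name]).pow(2).sum()
    return penalty

# usage sketch: total_loss = task_loss + l2_sp_penalty(model, w0_dict)
```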
SyFscqngM | [
"This paper essentially uses CycleGANs for Domain Adaptation.",
"My biggest concern is that it doesn't adequately compare to similar papers that perform adaptation at the pixel level",
"(eg. Shrivastava et al-'Learning from Simulated and Unsupervised Images through Adversarial Training'",
"and Bousmalis et al - 'Unsupervised Pixel-level Domain Adaptation with GANs',",
"two similar papers published in CVPR 2017 -the first one was even a best paper- and available on arXiv since December 2016-before CycleGANs).",
"I believe the authors should have at least done an ablation study to see if they cycle-consistency loss truly makes a difference on top of these works-that would be the biggest selling point of this paper.",
"The experimental section had many experiments, which is great.",
"However I think for semantic segmentation it would be very interesting to see whether using the adapted synthetic GTA5 samples would improve the SOTA on Cityscapes.",
"It wouldn't be unsupervised domain adaptation,",
"but it would be very impactful.",
"Finally I'm not sure the oracle (train on target) mIoU on Table 2 is SOTA,",
"and I believe the proposed model's performance is really far from SOTA.",
"Pros: * CycleGANs for domain adaptation!",
"Great idea!",
"* I really like the work on semantic segmentation,",
"I think this is a very important direction",
"Cons: * I don't think Domain separation networks is a pixel-level transformation-",
"that's a feature-level transformation,",
"you probably mean to use Bousmalis et al. 2017.",
"Also Shrivastava et al is missing from the image-level papers.",
"* the authors claim that Bousmalis et al, Liu & Tuzel and Shrivastava et al ahve only been shown to work for small image sizes.",
"There's a recent work by Bousmalis et al. (Using Simulation and Domain Adaptation to Improve Efficiency of Deep Robotic Grasping) that shows these methods working well (w/o cycle-consistency) for settings similar to semantic segmentation at a relatively high resolution.",
"Also it was mentioned that these methods do not necessarily preserve content, when pixel-da explicitly accounts for that with a task loss (identical to the semantic loss used in this submission)",
"* The authors talk about the content similarity loss on the foreground in Bousmalis et al. 2017,",
"but they could compare to this method w/o using the content similarity or using a different content similarity tailored to the semantic segmentation tasks, which would be trivial.",
"* Math seems wrong in (4) and (6).",
"(4) should be probably have a minus instead of a plus.",
"(6) has an argmin of a min,",
"not sure what is being optimized here.",
"In fact, I'm not sure if eg you use the gradients of f_T for training the generators?",
"* The authors mention that the pixel-da approach cross validates with some labeled data.",
"Although I agree that is not an ideal validation,",
"I'm not sure if it's equivalent or not the authors' validation setting,",
"as they don't describe what that is.",
"* The authors present the semantic loss as novel,",
"however this is the task loss proposed by the pixel-da paper.",
"* I didn't understand what pixel-only and feat-only meant in tables 2, 3, 4.",
"I couldn't find an explanation in captions or in text"
] | [
"fact",
"evaluation",
"reference",
"reference",
"fact",
"request",
"evaluation",
"request",
"fact",
"evaluation",
"evaluation",
"evaluation",
"evaluation",
"evaluation",
"evaluation",
"evaluation",
"fact",
"fact",
"evaluation",
"fact",
"fact",
"fact",
"fact",
"fact",
"evaluation",
"fact",
"evaluation",
"fact",
"evaluation",
"evaluation",
"fact",
"evaluation",
"evaluation",
"fact",
"fact",
"fact",
"fact",
"evaluation"
] |
B1BFRS7ZM | [
"There may be some interesting ideas here, ",
"but I think in many places the mathematical\\ndescription is very confusing and/or flawed.",
"To give some examples:\\n\\n* Just before section 2.1.1, P(T) = \\\\prod_{p \\\\in Path(T)} ... : it's not clear \\nat all clear that this defines a valid distribution over trees.",
"There is an\\nimplicit order over the paths in Path(T)",
"that is simply not defined",
"(otherwise\\nhow for x^p could we decide which symbols x^1 ... x^{p-1} to condition\\nupon?)\\n\\n",
"\\\"We can write S -> O | v | \\\\epsilon...\\\" ",
"with S, O and v defined as sets.\\n",
"This is certainly non-standard notation,",
"more explanation is needed.\\n\\n",
"\\\"The observation is generated by the sequence of left most \\nproduction rules\\\".",
"This appears to be related to the idea of left-most\\nderivations in context-free grammars. ",
"But no discussion is given, ",
"and\\nthe writing is again vague/imprecise.\\n\\n",
"\\\"Although the above grammar is not, in general, context free\\\"",
"- I'm not\\nsure what is being referred to here. ",
"Are the authors referring to the underlying grammar,\\nor the lack of independence assumptions in the model? ",
"The grammar\\nis clearly context-free; ",
"the lack of independence assumptions is a separate\\nissue.\\n\\n",
"\\\"In a probabilistic context-free grammar (PCFG), all production rules are\\nindependent\\\": ",
"this is not an accurate statement, ",
"it's not clear what is meant\\nby production rules being independent. ",
"More accurate would be to say that\\nthe choice of rule is conditionally independent of all other information \\nearlier in the derivation, once the non-terminal being expanded is\\nconditioned upon."
] | [
"evaluation",
"evaluation",
"evaluation",
"fact",
"fact",
"evaluation",
"quote",
"fact",
"evaluation",
"request",
"quote",
"evaluation",
"fact",
"evaluation",
"quote",
"non-arg",
"non-arg",
"evaluation",
"fact",
"quote",
"evaluation",
"evaluation",
"request"
] |
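As background for the reviewer's last point, the standard PCFG factorization (textbook material, not the submission's model) can be written as:

```latex
% Probability of a tree T in a PCFG: a product over the rule
% applications (A -> beta) in its derivation Deriv(T), where each rule
% choice is conditionally independent of the rest of the derivation
% given the nonterminal A being expanded.
P(T) \;=\; \prod_{(A \rightarrow \beta) \,\in\, \mathrm{Deriv}(T)} P(A \rightarrow \beta \mid A)
```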
B1MeHT3rG | [
"This paper proposes a model for generating pop music melodies with a recurrent neural network conditioned on chord and part (song section) information.",
"They train their model on a small dataset and compare it to a few existing models in a human evaluation.",
"I think this paper has many issues, which I describe below.",
"As a broad overview, the use of what the authors call \"word\" representations of notes is not novel (appearing first in BachBot and the PerformanceRNN);",
"I suspect the model may be outputting sequences from the training set;",
"and the dataset is heavily constrained in a way that will make producing pleasing melodies easily but heavily limits the possible outputs of the model.",
"The paper is also missing important references and is confusingly laid out (e.g. introducing a GAN model in a few paragraphs in the experiments).",
"Specific criticism: - \"often producing works that are indistinguishable from human works (Goodfellow et al. (2014); Radford et al. (2016); Potash et al. (2015)).\" I would definitely not say that any of the cited papers produce anything that could be confused as \"real\";",
"e.g. the early GAN papers you cite were not even close (maybe somewhat close for images of bedrooms, which is a limited domain and certainly cannot be considered a \"work\").",
"- There are many unsubstantiated claims in the second paragraph.",
"E.g. \"there is yet a certain aspect about it that makes it sound like (or not sound like) human-written music.\"",
"What is it?",
"What evidence do we have that this is true?",
"\" notes in the chorus part generally tend to be more high-pitched\"",
"Really?",
"Where was this measured?",
"\"music is not merely a series of notes, but entails an overall structure of its own\"",
"Sure, but natural images are not merely a series of pixels either, and they certainly have structure, but we are making lots of good progress modeling them. Etc.",
"- Your related work section is lacking.",
"For example, Eck & Schmidhuber in 2002 proposed using LSTMs for music composition, which is not much later than works from \"the early days\" despite having not \"employed rule or template based approach\".",
"Your note/time offset/duration representation is very similar to that of BachBot (by Liang) and Magenta's PerformanceRNN.",
"GANs were also previously applied to piano roll generation,",
"see MuseGAN (Dong et al), MidiNet (Yang et al), etc.",
"Your critique of Jaques et al. is misleading;",
"\"they defined a number of music-theory based rules to set up the reward function\"",
"is the whole point - this is an optional step which improves results, and there is no reason a priori to think that hand-designing regularizers is better than hand-designing RL objectives.",
"- The analogy to image captioning models is interesting,",
"but this type of image captioning model is not only model which is effectively a conditional language model - any sequence-to-sequence model can be looked at this way.",
"I don't think that these image captioning models are even the most commonly known example,",
"so I'm not sure why the proposed approach is being proposed in analogy to image captioning.",
"- I don't think you need to reproduce the LSTM equations in your text, they are well-known.",
"- You should define early on what you mean by \"part\",",
"I think you mean the song's section (verse, chorus, etc)",
"but I have also heard this used to refer to the different instruments in a song.",
"I don't think you should expect this term to be known outside of musical communities (e.g. the ICLR community).",
"- It seems simpler (and more in keeping with the current zeigeist, e.g. the image captioning models you refer to) to replace your HMM with a model that",
"- The regularization is interesting,",
"but a simpler way to enforce this constraint would be to just only allow the model to produce notes within that predefined range.",
"Since you effectively constrain it to an octave,",
"it would be simple to wrap all notes in your training data into this octave.",
"This baseline is probably worth comparing to",
"since it is substantially simpler than your regularizer.",
"- You write that the softmax cost should have \\frac{\\partial E}{\\partial p_i} \\mu added to it for the regularizer.",
"First, you don't define E anywhere, you only introduce it in its derivative",
"(and of course you can't \"define\" the derivative of an expression, it's an analytically computed quantity).",
"Second, are you sure you mean that the partial derivative should be added, and not the cost C itself?",
"- Your results showing that human raters preferred your models are impressive,",
"but you have made the task easier for yourself in various ways:",
"1) Constraining the training data to pop music",
"2) Making all of the training data in a single (major) key",
"3) Effectively limiting the melody range to within a single octave.",
"- It sounds very much like your model is repeating bars, e.g. it generates a melody of length N bars, then repeats this melody.",
"Is this something you hard-coded into the model?",
"It would be very surprising if it learned to exhibit this behavior on its own.",
"If you hard-coded it into the model, I would expect it to sound better to human raters,",
"but this is a strong heuristic.",
"- I'd suggest you provide example melodies from your model in isolation (more like the \"varying number of bars\" examples) rather than as part of a full music mix",
"- this makes it easier to judge the quality of the model's output.",
"- The GAN experiments are interesting but come as a big surprise and are largely orthogonal to the other model;",
"why not include this in your model description section?",
"The model and training details are not adequately described",
"and I don't think it adds much to the paper to include it.",
"Furthermore it's quite similar to the MidiNet and MuseGAN, so maybe it should be introduced as a baseline instead.",
"- How did you order the notes for chords?",
"If three notes occur simultaneously (in a chord), there's no a priory correct way to list them sequentially (two with an interval of length zero between notes).",
"- \"Generated instruments sound fairly in tune individually, confirming that our proposed model is applicable to other instruments as well\" Assuming you are still using C-major-only melodies, it's not surprising that the generations sound in tune!",
"- It is not surprising that your model ends up overfitting",
"because your dataset is very small,",
"your model is very powerful,",
"and your regularizer does not really limit the model's capacity much.",
"I suspect that your model is overfitting even earlier than you think.",
"You should check that none of the sequences output by your model appear in the training set.",
"You could easily compute n-gram overlap of the generated sequences vs. the training set.",
"At what point did you stop training before running the human evaluation?",
"If you let your model overfit, then of course it will generate very human-sounding melodies,",
"but this is not a terribly interesting generative model."
] | [
"fact",
"fact",
"evaluation",
"fact",
"fact",
"fact",
"evaluation",
"evaluation",
"evaluation",
"fact",
"quote",
"evaluation",
"evaluation",
"quote",
"fact",
"fact",
"quote",
"fact",
"evaluation",
"fact",
"evaluation",
"fact",
"reference",
"evaluation",
"quote",
"fact",
"evaluation",
"fact",
"evaluation",
"evaluation",
"evaluation",
"request",
"evaluation",
"non-arg",
"evaluation",
"request",
"evaluation",
"evaluation",
"fact",
"evaluation",
"request",
"evaluation",
"fact",
"fact",
"fact",
"evaluation",
"evaluation",
"fact",
"fact",
"fact",
"fact",
"evaluation",
"non-arg",
"evaluation",
"evaluation",
"evaluation",
"request",
"evaluation",
"evaluation",
"request",
"fact",
"evaluation",
"request",
"non-arg",
"fact",
"evaluation",
"evaluation",
"evaluation",
"evaluation",
"evaluation",
"fact",
"request",
"request",
"non-arg",
"fact",
"evaluation"
] |
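The memorization check suggested near the end of the review above (n-gram overlap between generated sequences and the training set) is straightforward to implement. A minimal sketch; the helper names and the default n=4 are hypothetical, not from the paper:

```python
def ngrams(seq, n):
    """Set of all contiguous n-grams in a sequence (list of notes/tokens)."""
    return {tuple(seq[i:i + n]) for i in range(len(seq) - n + 1)}

def ngram_overlap(generated, training, n=4):
    """Fraction of the generated sequence's n-grams that appear in training."""
    train_ngrams = set()
    for seq in training:
        train_ngrams |= ngrams(seq, n)
    gen_ngrams = ngrams(generated, n)
    if not gen_ngrams:
        return 0.0
    return len(gen_ngrams & train_ngrams) / len(gen_ngrams)

# An overlap near 1.0 for a long n (say n=8) would suggest the model is
# copying training bars rather than composing new ones.
```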
Hk96V1clf | [
"This paper generates adversarial examples using the fast gradient sign (FGS) and iterated fast gradient sign (IFGS) methods, but replacing the gradient computation with finite differences or another gradient approximation method. ",
"Since finite differences is expensive in high dimensions, ",
"the authors propose using directional derivatives based on random feature groupings or PCA. ",
"This paper would be much stronger if it surveyed a wider variety of gradient-free optimization methods. ",
"Notably, there's two important black-box optimization baselines that were not included: ",
"simultaneous perturbation stochastic approximation ( https://en.wikipedia.org/wiki/Simultaneous_perturbation_stochastic_approximation), which avoids computing the gradient explicitly, and evolutionary strategies ( https://blog.openai.com/evolution-strategies/ ), a similar method that uses several random directions to estimate a better descent direction.",
"The gradient approximation methods proposed in this paper may or may not be better than SPSA or ES. ",
"Without a direct comparison, it's hard to know. ",
"Thus, the main contribution of this paper is in demonstrating that gradient approximation methods are sufficient for generating good adversarial attacks and applying those attacks to Clarifai models. ",
"That's interesting and useful to know, but is still a relatively small contribution, making this paper borderline. ",
"I lean towards rejection, ",
"since the paper proposes new methods without comparing to or even mentioning well-known alternatives."
] | [
"fact",
"fact",
"fact",
"evaluation",
"fact",
"fact",
"evaluation",
"evaluation",
"evaluation",
"evaluation",
"evaluation",
"fact"
] |
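To illustrate the core idea described in the review above (a fast gradient sign step with the true gradient replaced by a finite-difference estimate), here is a minimal numpy sketch. All names are my own; `loss_fn` stands for a black-box scalar loss, and the paper's random-grouping/PCA tricks for reducing the two-queries-per-coordinate cost are omitted:

```python
import numpy as np

def fgs_finite_diff(x, loss_fn, eps=0.05, delta=1e-4):
    """One FGS step where the gradient is estimated by central differences."""
    grad = np.zeros_like(x)
    flat, g = x.reshape(-1), grad.reshape(-1)
    for i in range(flat.size):  # two loss queries per input coordinate
        e = np.zeros_like(flat)
        e[i] = delta
        g[i] = (loss_fn((flat + e).reshape(x.shape))
                - loss_fn((flat - e).reshape(x.shape))) / (2 * delta)
    return np.clip(x + eps * np.sign(grad), 0.0, 1.0)  # stay in image range
```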
Sy4mWsOeG | [
"Many black-box optimization problems are \"multi-fidelity\", in which it is possible to acquire data with different levels of cost and associated uncertainty.",
"The training of machine learning models is a common example, in which more data and/or more training may lead to more precise measurements of the quality of a hyperparameter configuration.",
"This has previously been referred to as a special case of \"multi-task\" Bayesian optimization, in which the tasks can be constructed to reflect different fidelities.",
"The present paper examines this construction with three twists: using the knowledge gradient acquisition function, using batched function evaluations, and incorporating derivative observations.",
"Broadly speaking, the idea is to allow fidelity to be represented as a point in a hypercube and then include this hypercube as a covariate in the Gaussian process.",
"The knowledge gradient acquisition function then becomes \"knowledge gradient per unit cost\" the KG equivalent to the \"expected improvement per unit cost\" discussed in Snoek et al (2012),",
"although that paper did not consider treating fidelity separately.",
"I don't understand the claim that this is \"the first multi-fidelity algorithm that can leverage gradients\".",
"Can't any Gaussian process model use gradient observations trivially, as discussed in the Rasmussen and Williams book?",
"Why can't any EI or entropy search method also use gradient observations?",
"This doesn't usually come up in hyperparameter optimization,",
"but it seems like a grandiose claim.",
"Similarly, although I don't know of a paper that explicitly does \"A + B\" for multi-fidelity BO and parallel BO,",
"it is an incremental contribution to combine them, not least because no other parallel BO methods get evaluated as baselines.",
"Figure 1 does not make sense to me.",
"How can the batched algorithm outperform the sequential algorithm on total cost?",
"The sequential cfKG algorithm should always be able to make better decisions with its remaining budget than 8-cfKG.",
"Is the answer that \"cost\" here means \"wall-clock time when parallelism is available\"?",
"If that's the case, then it is necessary to include plots of parallelized EI, entropy search, and KG.",
"The same is true for Figure 2; other parallel BO algorithms need to appear."
] | [
"fact",
"fact",
"fact",
"fact",
"fact",
"fact",
"fact",
"evaluation",
"fact",
"non-arg",
"fact",
"evaluation",
"evaluation",
"evaluation",
"evaluation",
"evaluation",
"fact",
"non-arg",
"request",
"request"
] |
rJGK3urgz | [
"In this paper, the authors trains a large number of MNIST classifier networks with differing attributes (batch-size, activation function, no. layers etc.) ",
"and then utilises the inputs and outputs of these networks to predict said attributes successfully. ",
"They then show that they are able to use the methods developed to predict the family of Imagenet-trained networks and use this information to improve adversarial attack.",
"I enjoyed reading this paper. ",
"It is a very interesting set up, and a novel idea.",
"A few comments:The paper is easy to read, and largely written well. ",
"The article is missing from the nouns quite often though ",
"so this is something that should be amended. ",
"There are a few spelling slip ups ",
"(\"to a certain extend\" --> \"to a certain extent\", ",
"\"as will see\" --> \"as we will see\")",
"It appears that the output for kennen-o is a discrete probability vector for each attribute, where each entry corresponds to a possibility ",
"(for example, for \"batch-size\" it is a length 3 vector where the first entry corresponds to 64, the second 128, and the third 256). ",
"What happens if you instead treat it as a regression task, would it then be able to hint at intermediates (a batch size of 96) or extremes (say, 512).",
"A flaw of this paper is that kennen-i and io appear to require gradients from the network being probed (you do mention this in passing), which realistically you would never have access to. ",
"(Please do correct me if I have misunderstood this)",
"It would be helpful if Section 4 had a paragraph as to your thoughts regarding why certain attributes are easier/harder to predict. ",
"Also, the caption for Table 2 could contain more information regarding the network outputs.",
"You have jumped from predicting 12 attributes on MNIST to 1 attribute on Imagenet. ",
"It could be beneficial to do an intermediate experiment (a handful of attributes on a middling task).",
"I think this paper should be accepted ",
"as it is interesting and novel.",
"Pros ------ - Interesting idea",
"- Reads well",
"- Fairly good experimental results",
"Cons ------ - kennen-i seems like it couldn't be realistically deployed",
"- lack of an intermediate difficulty task"
] | [
"fact",
"fact",
"fact",
"evaluation",
"evaluation",
"evaluation",
"evaluation",
"request",
"fact",
"request",
"request",
"fact",
"fact",
"request",
"fact",
"non-arg",
"request",
"request",
"fact",
"request",
"evaluation",
"evaluation",
"evaluation",
"evaluation",
"evaluation",
"evaluation",
"fact"
] |
HJmKXVcgz | [
"This paper proposes a ranking-based similarity metric for distributional semantic models. ",
"The main idea is to learn \"baseline\" word embeddings, retrofitting those and applying localized centering, to then calculate similarity using a measure called \"Ranking-based Exponential Similarity Measure\" (RESM), which is based on the recently proposed APSyn measure.",
"I think the work has several important issues:",
"1. The work is very light on references. ",
"There is a lot of previous work on evaluating similarity in word embeddings (e.g. Hill et al, a lot of the papers in RepEval workshops, etc.); specialization for similarity of word embeddings (e.g. Kiela et al., Mrksic et al., and many others); multi-sense embeddings (e.g. from Navigli's group); and the hubness problem (e.g. Dinu et al.). ",
"For the localized centering approach, Hara et al.'s introduced that method. ",
"None of this work is cited, which I find inexcusable.",
"2. The evaluation is limited, in that the standard evaluations (e.g. SimLex would be a good one to add, as well as many others, please refer to the literature) are not used and there is no comparison to previous work. ",
"The results are also presented in a confusing way, ",
"with the current state of the art results separate from the main results of the paper. ",
"It is unclear what exactly helps, in which case, and why.",
"3. There are technical issues with what is presented, with some seemingly factual errors. ",
"For example, \"In this case we could apply the inversion, however it is much more convinient [sic] to take the negative of distance. Number 1 in the equation stands for the normalizing, hence the similarity is defined as follows\" ",
"- the 1 does not stand for normalizing, that is the way to invert the cosine distance ",
"(put differently, cosine distance is 1-cosine similarity, which is a metric in Euclidean space due to the properties of the dot product). ",
"Another example, \"are obtained using the GloVe vector, not using PPMI\" ",
"- there are close relationships between what GloVe learns and PPMI, ",
"which the authors seem unaware of (see e.g. the GloVe paper and Omer Levy's work).",
"4. Then there is the additional question, why should we care? ",
"The paper does not really motivate why it is important to score well on these tests: ",
"these kinds of tests are often used as ways to measure the quality of word embeddings, ",
"but in this case the main contribution is the similarity metric used *on top* of the word embeddings. ",
"In other words, what is supposed to be the take-away, and why should we care?",
"As such, I do not recommend it for acceptance - ",
"it needs significant work before it can be accepted at a conference.",
"Minor points:- Typo in Eq 10",
"- Typo on page 6 (/cite instead of \\cite)"
] | [
"fact",
"fact",
"evaluation",
"evaluation",
"fact",
"fact",
"evaluation",
"evaluation",
"evaluation",
"fact",
"evaluation",
"evaluation",
"quote",
"fact",
"fact",
"quote",
"evaluation",
"evaluation",
"evaluation",
"evaluation",
"fact",
"evaluation",
"request",
"evaluation",
"evaluation",
"fact",
"fact"
] |
H1E1RgqxM | [
"# Summary This paper presents a new external-memory-based neural network (Neural Map) for handling partial observability in reinforcement learning. ",
"The proposed memory architecture is spatially-structured so that the agent can read/write from/to specific positions in the memory. ",
"The results on several memory-related tasks in 2D and 3D environments show that the proposed method outperforms existing baselines such as LSTM and MQN/FRMQN. ",
"[Pros] - The overall direction toward more flexible/scalable memory is an important research direction in RL.",
"- The proposed memory architecture is new. ",
"- The paper is well-written.",
"[Cons] - The proposed memory architecture is new but a bit limited to 2D/3D navigation tasks.",
"- Lack of analysis of the learned memory behavior.",
"# Novelty and Significance The proposed idea is novel in general. ",
"Though [Gupta et al.] proposed an ego-centric neural memory in the RL context, ",
"the proposed memory architecture is still new in that read/write operations are flexible enough for the agent to write any information to the memory, ",
"whereas [Gupta et al.] designed the memory specifically for predicting free space. ",
"On the other hand, the proposed method is also specific to navigation tasks in 2D or 3D environment, ",
"which is hard to apply to more general memory-related tasks in non-spatial environments. ",
"But, it is still interesting to see that the ego-centric neural memory works well on challenging tasks in a 3D environment.",
"# Quality The experiment does not show any analysis of the learned memory read/write behavior especially for ego-centric neural map and the 3D environment. ",
"It is hard to understand how the agent utilizes the external memory without such an analysis. ",
"# Clarity The paper is overall clear and easy-to-follow except for the following. ",
"In the introduction section, the paper claims that \"the expert must set M to a value that is larger than the time horizon of the currently considered task\" when mentioning the limitation of the previous work. ",
"In some sense, however, Neural Map also requires an expert to specify the proper size of the memory based on prior knowledge about the task."
] | [
"fact",
"fact",
"fact",
"evaluation",
"evaluation",
"evaluation",
"evaluation",
"fact",
"evaluation",
"fact",
"fact",
"fact",
"fact",
"evaluation",
"evaluation",
"fact",
"evaluation",
"evaluation",
"fact",
"evaluation"
] |
HJUMdjteM | [
"The authors propose a model for learning physical interaction skills through trial and error.",
"They use end-to-end deep reinforcement learning - the DQN model - including the task goal as an input in order to to improve generalization over several tasks, and shaping the reward depending on the visual differences between the goal state and the current state.",
"They show that the task performance of their model is better than the DQN on two simulated tasks.",
"The paper is well-written, clarity is good,",
"it could be slightly improved by updating the title \"Toy example with Goal integration\" to make it consistent with the naming \"navigation task\" used elsewhere.",
"If the proposed model is new given the reviewer's knowledge, the contribution is small.",
"The biggest change compared to the DQN model is the addition of information in the input.",
"The authors initially claim that \"In this paper, [they] study how an artificial agent can autonomously acquire this intuition through interaction with the environment\",",
"however the proposed tasks present little to no realistic physical interaction:",
"the navigation task is a toy problem where no physics is simulated.",
"In the stacking task, only part of the simulation actually use the physical simulation result.",
"Given that machine learning methods are in general good at finding optimal policies that exploit simulation limitations,",
"this problem seems a threat to the significance of this work.",
"The proposed GDQN model shows better performance than the DQN model.",
"However, as the authors do not provide in-depth analysis of what the network learns (e.g. by testing policies in the absence of an explicit goal),",
"it is difficult to judge if the network learnt a meaningful representation of the world's physics.",
"This limitation along with potential other are not discussed in the paper.",
"Finally, more than a third (10/26) references point to Arxiv papers.",
"Despite Arxiv definitely being an important tool for paper availability, it is not peer-reviewed and there are also work that are non-finished or erroneous.",
"It is thus a necessary condition that all Arxiv references are replaced by the peer-reviewed material when it exist (e.g. Lerer 2016 in ICML or Denil 2016 in ICLR 2017), once again to strengthen the author's claim."
] | [
"fact",
"fact",
"fact",
"evaluation",
"request",
"evaluation",
"evaluation",
"quote",
"fact",
"fact",
"fact",
"fact",
"evaluation",
"fact",
"fact",
"evaluation",
"evaluation",
"fact",
"fact",
"request"
] |
S1uLIj8lG | [
"* sec.2.2 is about label-preserving translation ",
"and many notations are introduced. ",
"However, it is not clear what label here refers to, ",
"and it does not shown in the notation so far at all. ",
"Only until the end of sec.2.2, the function F(.) is introduced and its revelation - Google Search as label function is discussed only at Fig.4 and sec.2.3.",
"* pp.5 first paragraph: when assuming D_X and D_Y being perfect, why L_GAN_forward = L_GAN_backward = 0? ",
"To trace back, in fact it is helpful to have at least a simple intro/def. to the functions D(.) and G(.) of Eq.(1). ",
"* Somehow there is a feeling that the notations in sec.2.1 and sec.2.2 are not well aligned. ",
"It is helpful to start providing the math notations as early as sec.2.1, ",
"so labels, pseudo labels, the algorithm illustrated in Fig.2 etc. can be consistently integrated with the rest notations. ",
"* F() is firstly shown in Fig.2 the beginning of pp.3, and is mentioned in the main text as late as of pp.5.",
"* Table 2: The CNN baseline gives an error rate of 7.80 ",
"while the proposed variants are 7.73 and 7.60 respectively. ",
"The difference of 0.07/0.20 are not so significant. ",
"Any explanation for that?",
"Minor issues: * The uppercase X in the sentence before Eq.(2) should be calligraphic X"
] | [
"fact",
"evaluation",
"evaluation",
"fact",
"fact",
"request",
"evaluation",
"evaluation",
"evaluation",
"evaluation",
"fact",
"fact",
"fact",
"evaluation",
"request",
"request"
] |
Byz0IGvgz | [
"This paper combines the tensor contraction method and the tensor regression method and applies them to CNN.",
"This paper is well written and easy to read.",
"However, I cannot find a strong or unique contribution from this paper.",
"Both of the methods (tensor contraction and tensor decomposition) are well developed in the existing studies,",
"and combining these ideas does not seem non-trivial.",
"--Main question Why authors focus on the combination of the methods?",
"Both of the two methods can perform independently.",
"Is there a special synergy effect?",
"--Minor question The performance of the tensor contraction method depends on a size of tensors.",
"Is there any effective way to determine the size of tensors?"
] | [
"fact",
"evaluation",
"evaluation",
"evaluation",
"evaluation",
"evaluation",
"fact",
"non-arg",
"fact",
"non-arg"
] |
rk156h2gf | [
"The manuscript proposes a new framework for inference in RNN based upon the Bayes by Backprop (BBB) algorithm. ",
"In particular, the authors propose a new framework to \"sharpen\" the posterior.",
"In particular, the hierarchical prior in (6) and (7) frame an interesting modification to directly learning a multivariate normal variational approximation. ",
"In the experimental results, it seems clear that this approach is beneficial, but it's not clear as to why. ",
"In particular, how does the variational posterior change as a result of the hierarchical prior? ",
"It seems that (7) would push the center of the variational structure back towards the MAP point and reduces the variance of the output of the hierarchical prior; ",
"however, with the two layers in the prior it's unclear what actually is happening. ",
"Carefully explaining *what* the authors believe is happening and exploring how it changes the variational approximation in a classic modeling framework would be beneficial to understanding the proposed change and evaluating it. ",
"As a final point, the authors state, \"as long as the improvement along the gradient is great than the KL loss incurred...this method is guaranteed to make progress towards optimizing L.\" ",
"Do the authors mean that the negative log-likelihood will be improved in this case? ",
"Or the actual optimization? ",
"Improving the negative log-likelihood seems straightforward, ",
"but I am confused by what the authors mean by optimization.",
"The new evaluation metric proposed in Section 6.1.1 is confusing, ",
"and I do not understand what the metric is trying to capture. ",
"This needs significantly more detail and explanation. ",
"Also, it is unclear to me what would happen when you input data examples that are opposite to the original input sequence; ",
"in particular, for many neural networks the predictions are unstable outside of the input domain and inputting infeasible data leads to unusable outputs. ",
"It's completely feasible that these outputs would just be highly uncertain, ",
"and I'm not sure how you can ascribe meaning to them. ",
"The authors should not compare to the uniform prior as a baseline for entropy. ",
"It's much more revealing to compare it to the empirical likelihoods of the words."
] | [
"fact",
"evaluation",
"evaluation",
"evaluation",
"evaluation",
"evaluation",
"evaluation",
"request",
"quote",
"request",
"request",
"evaluation",
"evaluation",
"evaluation",
"evaluation",
"request",
"evaluation",
"evaluation",
"evaluation",
"evaluation",
"request",
"request"
] |
ryIbx22yz | [
"The authors perform a set of experiments in which they inspect the Hessian matrix of the loss of a neural network, and observe that most of the eigenvalues are very close to zero. ",
"This is a potentially important observation, ",
"and the experiments were well worth performing, ",
"but I don't find them fully convincing ",
"(partly because I was confused by the presentation).",
"They perform four sets of experiments:",
"1) In section 3.1, they show on simulated data that for data drawn from k clusters, there are roughly k significant eigenvalues in the Hessian of the solution.",
"2) In section 3.2, they show on MNIST that the solution contains few large eigenvalues, and also that there are negative eigenvalues.",
"3) In section 3.3, they show (again on MNIST) that at their respective solutions, large batch and small batch methods find solutions with similar numbers of large eigenvalues, but that for the large batch method the magnitudes are larger.",
"4) In section 4.1, they train (on CIFAR10) using a large batch method, and then transition to a small batch method, and argue that the second solution appears to be better than the first, but that they are a part of the same basin ",
"(since linearly while interpolating between them they don't run into any barriers).",
"I'm not fully convinced by the second and third experiments, ",
"partly because I didn't fully understand the plots (more on this below), ",
"but also because it isn't clear to me what we should expect from the spectrum of a Hessian, ",
"so I don't know whether the observed specra have fewer large eigenvalues, or more large eigenvalues, then would be \"natural\". ",
"In other words, there isn't a *baseline*.",
"For the fourth experiment, it's unsurprising that the small batch method winds up in a different location in the same basin as the large batch method, ",
"since it was initialized to the large batch method's solution ",
"(and it doesn't appear to me, in figure 9, that the small batch solution is significantly different).",
"Section 2.1 is said to contain an argument that the second term of equation 5 can be ignored, but only says that if \\ell' and \\nabla^2 of f are uncorrelated, then it can be ignored. ",
"I don't see any reason that these two quantities should be correlated, ",
"but this is not an argument that they are uncorrelated. ",
"Also, it isn't clear to me where this approximation was used--everywhere? ",
"In section 3.2, it sounds as if the exact Hessian is used, ",
"and at the end of this section the authors say that figure 6 demonstrates that the effect of this second term is small, ",
"but I don't see why this is, ",
"and it isn't explained.",
"My main complaint is that I had a great deal of difficulty interpreting the plots: ",
"it often wasn't clear to me what exactly was being plotted, ",
"and most of the language describing them was frustratingly vague. ",
"For example, figure 6 is captioned \"left edge of the spectrum, eigenvalues are scaled by their ratio\". ",
"The text explains that \"left edge of the spectrum\" means \"small but negative eigenvalues\" ",
"(this would be better in the caption), ",
"but what are the ratios? ",
"Ratio of what to what? ",
"I think it would greatly enhance clarity if every plot caption described exactly, and unambiguously, what quantities were plotted on the horizontal and vertical axes.",
"Some minor notes:There are a number of places where \"it's\" is used, where it should be \"its\".",
"In the introduction, the definition of \\mathcal{L}' is slightly confusing, ",
"since it's an expectation, ",
"but the use of \"'\" makes one expect a derivative. ",
"Perhaps use \\hat{\\mathcal{L}} for the empirical loss, and \\mathcal{L} for the expected one?",
"On the bottom of page 4, \"if \\ell' and \\nabla f are not correlated\": I think the \\nabla should be \\nabla^2.",
"It's \"principal components\", not \"principle components\"."
] | [
"fact",
"evaluation",
"evaluation",
"evaluation",
"evaluation",
"fact",
"fact",
"fact",
"fact",
"fact",
"fact",
"evaluation",
"evaluation",
"evaluation",
"evaluation",
"fact",
"evaluation",
"fact",
"evaluation",
"fact",
"fact",
"non-arg",
"evaluation",
"evaluation",
"fact",
"evaluation",
"fact",
"evaluation",
"evaluation",
"evaluation",
"fact",
"fact",
"evaluation",
"evaluation",
"evaluation",
"request",
"request",
"evaluation",
"fact",
"evaluation",
"request",
"request",
"request"
] |
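The measurement at the center of the review above (the eigenvalue spectrum of the loss Hessian) can be reproduced at toy scale. A self-contained numpy sketch of my own (not the authors' code): a central-difference Hessian of a one-hidden-unit regression loss, followed by an eigendecomposition:

```python
import numpy as np

def num_hessian(f, w, h=1e-4):
    """Central-difference Hessian of a scalar function f at point w."""
    d = w.size
    H = np.zeros((d, d))
    for i in range(d):
        for j in range(d):
            ei = np.zeros(d); ei[i] = h
            ej = np.zeros(d); ej[j] = h
            H[i, j] = (f(w + ei + ej) - f(w + ei - ej)
                       - f(w - ei + ej) + f(w - ei - ej)) / (4 * h * h)
    return H

rng = np.random.default_rng(0)
X, y = rng.normal(size=(50, 3)), rng.normal(size=50)

def loss(w):  # tiny "network": one tanh hidden unit with output weight w[3]
    return np.mean((np.tanh(X @ w[:3]) * w[3] - y) ** 2)

eigvals = np.linalg.eigvalsh(num_hessian(loss, rng.normal(size=4)))
print(eigvals)  # inspect how many eigenvalues are close to zero
```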
Hy4tIW5xf | [
"The paper \"IMPROVING SEARCH THROUGH A3C REINFORCEMENT LEARNING BASED CONVERSATIONAL AGENT\" proposes to define an agent to guide users in information retrieval tasks.",
"By proposing refinements of the query, categorizations of the results or some other bookmarking actions, the agent is supposed to help the user in achieving his search.",
"The proposed agent is learned via reinforcement learning.",
"My concern with this paper is about the experiments that are only based on simulated agents, as it is the case for learning.",
"While it can be questionable for learning",
"(but we understand why it is difficult to overcome),",
"it is very problematic for the experiments to not have anything that demonstrates the usability of the approach in a real-world scenario.",
"I have serious doubts about the performances of such an artificially learned approach for achieving real-world search tasks.",
"Also, for me the experimental section is not sufficiently detailed, which lead to not reproducible results.",
"Moreover, authors should have considered baselines",
"(only the two proposed agents are compared which is clearly not sufficient).",
"Also, both models have some issues from my point of view.",
"First, the Q-learning methods looks very complex:",
"how could we expect to get an accurate model with 10^7 states ?",
"No generalization about the situations is done here,",
"examples of trajectories have to be collected for each individual considered state,",
"which looks very huge (especially if we think about the number of possible trajectories in such an MDP).",
"The second model is able to generalize from similar situations thanks to the neural architecture that is proposed.",
"However, I have some concerns about it:",
"why keeping the history of actions in the inputs since it is captured by the LSTM cell ?",
"It is a redondant information that might disturb the process.",
"Secondly, the proposed loss looks very heuristic for me,",
"it is difficult to understand what is really optimized here.",
"Particularly, the loss entropy function looks strange to me.",
"Is it classical ?",
"Are there some references of such a method to maintain some exploration ability.",
"I understand the need of exploration,",
"but including it in the loss function reduces the interpretability of the objective",
"(wouldn't it be preferable to use a more classical loss but with an epsilon greedy policy?).",
"Other remarks: - In the begining of \"varying memory capacity\" section, what is \"100, 150 and 250\" ?",
"Time steps ?",
"What is the unit ?",
"Seconds ?",
"- I did not understand the \"Capturing seach context at local and global level\" at all",
"- In the loss entropy formula, the two negation signs could be removed",
"- Wouldn't it be possible to use REINFORCE or other policy gradient method rather than roll-outs used in the paper (which lead to biased gradient updates) ?"
] | [
"fact",
"fact",
"fact",
"evaluation",
"evaluation",
"evaluation",
"evaluation",
"evaluation",
"evaluation",
"request",
"evaluation",
"evaluation",
"evaluation",
"evaluation",
"fact",
"fact",
"evaluation",
"fact",
"evaluation",
"evaluation",
"evaluation",
"evaluation",
"evaluation",
"evaluation",
"request",
"request",
"evaluation",
"evaluation",
"evaluation",
"non-arg",
"non-arg",
"non-arg",
"non-arg",
"evaluation",
"request",
"request"
] |
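Regarding the reviewer's question about whether the entropy term is classical: an entropy bonus subtracted from the policy loss is the standard A3C recipe (Mnih et al., 2016), used to delay premature convergence of the policy. A minimal sketch assuming PyTorch, covering the policy part only, with illustrative names:

```python
import torch

def a3c_policy_loss(log_probs, actions, advantages, beta=0.01):
    """log_probs: (batch, n_actions) log-softmax outputs; advantages: (batch,)."""
    chosen = log_probs.gather(1, actions.unsqueeze(1)).squeeze(1)
    pg_loss = -(chosen * advantages.detach()).mean()
    entropy = -(log_probs.exp() * log_probs).sum(dim=1).mean()  # H(pi)
    return pg_loss - beta * entropy  # higher entropy -> more exploration
```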
H1rLr8ZNM | [
"This paper proposed to combine three kinds of data sources: real, simulated and unlabeled, to help solve \"small\" data issue occurring in packet stream.",
"A directed information flow graph was constructed,",
"a multi-headed network was trained by using Keras and GAN library.",
"Its use on the packet sequence classification can archive comparable accuracy while relieve operation engineers from heavy background learning.",
"The presentation of this paper can be improved.",
"* With the missing citations as \"(?)\" and not clearly defined concepts, including property of function H (any function? convex?) in (3),",
"full name of TCP/abbr of GAN when first appear, etc.",
"reader might need to make guesses to follow.",
"* P2: You can draw your audience by expanding the \"related work\" like a story:",
"more background of GAN etc. and one or two highlight formula to help clear the idea",
"* P3: What's the purpose of inserting \"dummy packets to denote the timeout between two time stamps\"?",
"* P3: Help sell to \"non-engineer\" by maybe having image example or even plainer language to describe the meaning (deep difference/purpose) of \"3 levels of feature engineering\"; and when addressing features, mentioned as 1,2,3, while in Table 1, shown as Feature=0,1,2;",
"* P6: section 4.2 mentioned \"only metrics cared by operators\", is this what you mean by \"relieve operation engineers ...\",",
"and which is or how to decide the cutoff accuracy the engineers should make a Go or No Go decision?"
] | [
"fact",
"fact",
"fact",
"evaluation",
"request",
"request",
"request",
"evaluation",
"request",
"request",
"non-arg",
"request",
"non-arg",
"non-arg"
] |
HJNeoqYNG | [
"This paper focuses on the problem of \\\"machine teaching\\\", ",
"i.e., how to select a good strategy to select training data points to pass to a machine learning algorithm, for faster learning. ",
"The proposed approach leverages reinforcement learning by defining the reward as how fast the learner learns, ",
"and use policy gradient to update the teacher parameters. ",
"I find the definition of the \\\"state\\\" in this case very interesting. ",
"The experimental results seem to show that such a learned teacher strategy makes machine learning algorithms learn faster. ",
"\\n\\nOverall I think that this paper is decent. ",
"The angle the authors took is interesting (essentially replacing one level of the bi-level optimization problem in machine teaching works with a reinforcement learning setup). ",
"The problem formulation is mostly reasonable, ",
"and the evaluation seems quite convincing. ",
"The paper is well-written: ",
"I enjoyed the mathematical formulation (Section 3). ",
"The authors did a good job of using different experiments (filtration number analysis, and teaching both the same architecture and a different architecture) to intuitively explain what their method actually does. ",
"\\n\\nAt the same time, though, I see several important issues that need to be addressed if this paper is to be accepted. ",
"Details below. \\n\\n1. As much as I enjoyed reading Section 3, it is very redundant. ",
"In some cases it is good to outline a powerful and generic framework",
"(like the authors did here with defining \\\"teaching\\\" in a very broad sense, including selecting good loss functions and hypothesis spaces) ",
"and then explain that the current work focuses on one aspect (selecting training data points). ",
"However, I do not see it being the case here. ",
"In my opinion, selecting good loss functions and hypothesis spaces are much harder problems than data teaching - except maybe when one use a pre-defined set of possible loss functions and select from it. ",
"But that is not very interesting",
"(if you can propose new loss functions, that would be way cooler). ",
"I also do not see how to define an intuitive set of \\\"states\\\" in that case. ",
"Therefore, I think this section should be shortened. ",
"I also think that the authors should not discuss the general framework and rather focus on \\\"data teaching\\\",",
"which is the only focus of the current paper. ",
"The abstract and introduction should also be modified accordingly to more honestly reflect the current contributions. ",
"\\n2. The authors should do a better job at explaining the details of the state definition,",
"especially the student model features and the combination of data and current learner model. ",
"\\n3. There is only one definition of the reward – related to batch number when the accuracy first exceeds a threshold. ",
"Is accuracy stable,",
"can it drop back down below the threshold in the next epoch? ",
"The accuracy on a held-out test set is not guaranteed to be monotonically increasing, right? ",
"Is this a problem in practice (it seems to happen on your curves)? ",
"What about other potential reward definitions? ",
"And what would they potentially lead to? ",
"\\n4. Experimental results are averaged over 5 repeated runs ",
"- a bit too small in my opinion. ",
"\\n5. Can the authors show convergence of the teacher parameter \\\\theta? ",
"I think it is important to see how fast the teacher model converges, too. ",
"\\n6. In some of your experiments, every training method converges to the same accuracy after enough training (Fig.2b), while in others, not quite (Fig. 2a and 2c). ",
"Why is this the case? ",
"Does it mean that you have not run enough iterations for the baseline methods? ",
"My intuition is that if the learner algorithm is convex, then ultimately they will all get to the same accuracy level, ",
"so the task is just to get there quicker. ",
"I understand that since the learner algorithm is an NN, this is not the case – ",
"but more explanation is necessary here – ",
"does your method also reduces the empirical possibility to get stuck in local minima? ",
"\\n7. More explanation is needed towards Fig.4c. ",
"In this case, using a teacher model trained on a harder task (CIFAR10) leads to much improved student training on a simpler task (MNIST). Why?\\n ",
"8. Although in terms of \\\"effective training data points\\\" the proposed method outperforms the other methods,",
"in terms of time (Fig.5) the difference between it and say, NoTeach, is not that significant (especially at very high desired accuracy). ",
"More explanation needed here. ",
"\\n\\nRead the rebuttal and revision and slightly increased my rating."
] | [
"fact",
"fact",
"fact",
"fact",
"evaluation",
"evaluation",
"evaluation",
"evaluation",
"evaluation",
"evaluation",
"evaluation",
"evaluation",
"evaluation",
"evaluation",
"evaluation",
"request",
"fact",
"request",
"evaluation",
"evaluation",
"evaluation",
"request",
"evaluation",
"request",
"request",
"evaluation",
"request",
"request",
"request",
"fact",
"non-arg",
"request",
"non-arg",
"non-arg",
"non-arg",
"non-arg",
"fact",
"evaluation",
"request",
"request",
"fact",
"non-arg",
"non-arg",
"evaluation",
"evaluation",
"evaluation",
"request",
"non-arg",
"request",
"non-arg",
"fact",
"evaluation",
"request",
"non-arg"
] |
BkaINb9xz | [
"The authors propose an extension to CNN using an autoregressive weighting for asynchronous time series applications.",
"The method is applied to a proprietary dataset as well as a couple UCI problems and a synthetic dataset, showing improved performance over baselines in the asynchronous setting.",
"This paper is mostly an applications paper.",
"The method itself seems like a fairly simple extension for a particular application,",
"although perhaps the authors have not clearly highlighted details of methodological innovation.",
"I liked that the method was motivated to solve a real problem, and that it does seem to do so well compared to reasonable baselines.",
"However, as an an applications paper, the bread of experiments are a little bit lacking",
"-- with only that one potentially interesting dataset, which happens to proprietary.",
"Given the fairly empirical nature of the paper in general, it feels like a strong argument should be made, which includes experiments, that this work will be generally significant and impactful.",
"The writing of the paper is a bit loose with comments like:",
"“Besides these and claims of secretive hedge funds (it can be marketing surfing on the deep learning hype), no promising results or innovative architectures were publicly published so far, to the best of our knowledge.”",
"Parts of the also appear rush written, with some sentences half finished:",
"“\"ues of x might be heterogenous, hence On the other hand, significance network provides data-dependent weights for all regressors and sums them up in autoregressive manner.””",
"As a minor comment, the statement",
"“however, due to assumed Gaussianity they are inappropriate for financial datasets, which often follow fat-tailed distributions (Cont, 2001).”",
"Is a bit too broad.",
"It depends where the Gaussianity appears.",
"If the likelihood is non-Gaussian, then it often doesn’t matter if there are latent Gaussian variables."
] | [
"fact",
"fact",
"evaluation",
"evaluation",
"evaluation",
"evaluation",
"evaluation",
"fact",
"request",
"evaluation",
"quote",
"evaluation",
"quote",
"non-arg",
"quote",
"evaluation",
"fact",
"fact"
] |
Hk0lS3teG | [
"The authors analyze show theoretical shortcomings in previous methods of explaining neural networks and propose an elegant way to remove these shortcomings in their methods PatternNet and PatternAttribution.",
"The quest of visualizing neural network decision is now a very active field with many contributions.",
"The contribution made by the authors stands out due to its elegant combination of theoretical insights and improved performance in application.",
"The work is very detailed and reads very well.",
"I am missing at least one figure with comparison with more state-of-the-art methods",
"(e.g. I would love to see results from the method by Zintgraf et al. 2017 which unlike all included prior methods seems to produce much crisper visualizations and also is very related because it learns from the data, too).",
"Minor questions and comments:* Fig 3: Why is the random method so good at removing correlation from fc6?",
"And the S_w even better?",
"Something seems special about fc6.",
"* Fig 4: Why is the identical estimator better than the weights estimator and that one better than S_a?",
"* It would be nice to compare the image degradation experiment with using the ranking provided by the work from Zintgraf which should by definition function as a kind of gold standard",
"* Figure 5, 4th row (mailbox): It looks like the umbrella significantly contributes to the network decision to classify the image as \"mailbox\" which doesn't make too much sense.",
"Is is a problem of the visualization (maybe there is next to no weight on the umbrella), of PatternAttribution or a strange but interesting a artifact of the analyzed network?",
"* page 8 \"... closed form solutions (Eq (4) and Eq. (7))\"",
"The first reference seems to be wrong.",
"I guess Eq 4. should instead reference the unnumbered equation after Eq. 3."
] | [
"fact",
"evaluation",
"evaluation",
"evaluation",
"request",
"request",
"request",
"request",
"evaluation",
"request",
"request",
"evaluation",
"evaluation",
"quote",
"fact",
"request"
] |
B1Fe0Zqxz | [
"The paper presents a way to regularize a sequence generator by making the hidden states also predict the hidden states of an RNN working backward.",
"Applied to sequence-to-sequence networks, the approach requires training one encoder, and two separate decoders, that generate the target sequence in forward and reversed orders. ",
"A penalty term is added that forces an agreement between the hidden states of the two decoders. ",
"During model evaluation only the forward decoder is used, with the backward operating decoder discarded. ",
"The method can be interpreted to generalize other recurrent network regularizers, such as putting an L2 loss on the hidden states.",
"Experiments indicate that the approach is most successful when the regularized RNNs are conditional generators, which emit sequences of low entropy, such as decoders of a seq2seq speech recognition network. ",
"Negative results were reported when the proposed regularization technique was applied to language models, whose output distribution has more entropy.",
"The proposed regularization is evaluated with positive results on a speech recognition task and on an image captioning task, and with negative results (no improvement, but also no deterioration) on a language modeling and sequential MNIST digit generation tasks.",
"I have one question about baselines: is the proposed approach better than training to forward generators and force an agreement between them (in the spirit of the concurrent ICLR submission https://openreview.net/forum?id=rkr1UDeC-)? ",
"Also, would using the backward RNN, e.g. for rescoring, bring another advantage? ",
"In other words, what is (and is there) a gap between an ensemble of a forward and backward rnn and the forward-rnn only, but trained with the state-matching penalty?",
"Quality:The proposed approach is well motivated ",
"and the experiments show the limits of applicability range of the technique.",
"Clarity:The paper is clearly written.",
"Originality:The presented idea seems novel.",
"Significance:The method may prove to be useful to regularize recurrent networks, ",
"however I would like to see a comparison with ensemble methods. ",
"Also, as the authors note the method seems to be limited to conditional sequence generators.",
"Pros and cons:Pros: the method is simple to implement, ",
"the paper lists for what kind of datasets it can be used.",
"Cons: the method needs to be compared with typical ensembles of models going only forward in time, ",
"it may turn that it using the backward RNN is not necessary"
] | [
"fact",
"fact",
"fact",
"fact",
"fact",
"fact",
"fact",
"fact",
"non-arg",
"non-arg",
"non-arg",
"evaluation",
"fact",
"evaluation",
"evaluation",
"evaluation",
"request",
"fact",
"evaluation",
"fact",
"request",
"evaluation"
] |
HydgKG5ez | [
"The paper proposes a CNN-based based approach for speech processing using raw waveforms as input. ",
"An analysis of convolution and pooling layers applied on waveforms is first presented. ",
"An architecture called SimpleNet is then presented and evaluated on two speech tasks: emotion recognition and gender classification. ",
"This paper propose a theoretical analysis of convolution and pooling layers to motivate the SimpleNet architecture. ",
"To my understanding, the analysis is flawed (see comments below). ",
"The SimpleNet approach is interesting but not sufficiently backed with experimental results. ",
"The network analysis is minimal and provides almost no insights. ",
"I therefore recommend to reject the paper. ",
"Detailed comments:Section 1:* “Therefore, it remains unknown what actual features CNNs learn from waveform”. ",
"This is not true, ",
"several works on speech recognition have shown that a convolution layer taking raw speech as input can be seen as a bank of learned filters. ",
"For instance in the context of speech recognition, [9] showed that the filters learn phoneme-specific responses, ",
"[10] showed that the learned filters are close to Mel filter banks ",
"and [7] showed that the learned filters are related to MRASTA features and Gabor filters. ",
"The authors should discuss these previous works in the paper.",
"Section 2:* Section 2.1 seems unnecessary, ",
"I think it’s safe to assume that the Shannon-Nyquist theorem and the definition of convolution are known by the reader.",
"* Section 2.2.2 & 2.2.3: I don't follow the justification that stacking convolutions are not needed: ",
"the example provided is correct if two convolutions are directly stacked without non-linearity, but the conclusion does not hold with a non-linearity and/or a pooling layer between the convolutions: ",
"two stacked convolutions with non-linearities are not equivalent to a single convolution. ",
"To my understanding, the same problem is present for the pooling layer: ",
"the presented conclusion that pooling introduces aliasing is only valid for two directly stacked pooling layers and is not correct for stacked blocks of convolution/pooling/non-linearity.",
"* Section 2.2.5: The ReLU can be seen as a half-wave rectifier if it is applied directly to the waveform. ",
"However, it is usually not the case ",
"as it is applied on the output of the convolution and/or pooling layers. Therefore I don’t see the point of this section. ",
"* Section 2.2.6: In this section, the authors discuss the differences between spectrogram-based and waveforms-based approaches, assuming that spectrogram-based approach have fixed filters. ",
"But spectrogram can also be used as input to CNNs (i.e. using learned filters) for instance in speech recognition [1] or emotion recognition [11]. ",
"Thus the comparison could be more interesting if it was between spectrogram-based and raw waveform-based approaches when the filters are learned in both cases. ",
"Section 3:* Figure 4 is very interesting, ",
"and is in my opinion a stronger motivation for SimpleNet that the analysis presented in Section 2.",
"* Using known filterbanks such as Mel or Gammatone filters as initialization point for the convolution layer is not novel and has been already investigated in [7,8,10] in the context of speech recognition. ",
"Section 4:* On emotion recognition, the results show that the proposed approach is slightly better, ",
"but there is some issues: the average recall metric is usually used for this task due to class imbalance (see [1] for instance). ",
"Could the authors provide results with this metric ? ",
"Also IEMOCAP is a well-used corpus for this task, ",
"could the authors provide some baselines performance for comparison (e.g. [11]) ? ",
"* For gender classification, there is no gain from SimpleNet compared to the baselines. ",
"The authors also mention that some utterances have overlapping speech. ",
"These utterances are easy to find from the annotations provided with the corpus, ",
"so it should be easy to remove them for the train and test set. ",
"Overall, in the current form, the results are not convincing.",
"* Section 4.3: The analysis is minimal: ",
"it shows that filters changed after training (as already presented in Figure 4). ",
"I don't follow completely the argument that the filters should focus on low frequency. ",
"It is more informative, ",
"but one could expect that the filters will specialized, thus some of them will focus on high frequencies, to model the high frequency events such as consonants or unvoiced event. ",
"It could be very interesting to relate the learned filters to the labels: ",
"are some filters learned to model specific emotions ? ",
"For gender classification, are some filters focusing on the average pitch frequency of male and female speaker ?",
"* Finally, it would be nice to see if the claims in Section 2 about the fact that only one convolution layer is needed and that stacking pooling layers can hurt the performance are verified experimentally: for instance, experiments with more than one pair of convolution/pooling could be presented.",
"Minor comments:* More references for raw waveforms-based approach for speech recognition should be added [3,4,6,7,8,9] in the introduction.",
"* I don’t understand the first sentence of the paper: “In the field of speech and audio processing, due to the lack of tools to directly process high dimensional data …”. ",
"Is this also true for any pattern recognition fields ? ",
"* For the MFCCs reference in 2.2.2, the authors should cite [12].",
"* Figure 6: Only half of the spectrum should be presented.",
"References: [1] H. Lee, P. Pham, Y. Largman, and A. Y. Ng. Unsupervised feature learning for audio classification using convolutional deep belief networks. In Advances in Neural Information Processing Systems 22, pages 1096–1104, 2009.",
"[2] Schuller, Björn, Stefan Steidl, and Anton Batliner. \"The interspeech 2009 emotion challenge.\" Tenth Annual Conference of the International Speech Communication Association. 2009.",
"[3] N. Jaitly, G. Hinton, Learning a better representation of speech sound waves using restricted Boltzmann machines, in: Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2011, pp. 5884–5887.",
"[4] D. Palaz, R. Collobert, and M. Magimai.-Doss. Estimating Phoneme Class Conditional Probabilities from Raw Speech Signal using Convolutional Neural Networks, INTERSPEECH 2013, pages 1766–1770.",
"[5] Van den Oord, Aaron, Sander Dieleman, and Benjamin Schrauwen. \"Deep content-based music recommendation.\" Advances in neural information processing systems. 2013.",
"[6] Z.Tuske, P.Golik, R.Schluter, H.Ney, Acoustic Modeling with Deep Neural Networks Using Raw Time Signal for LVCSR, in: Proceedings of the Annual Conference of the International Speech Communication Association (INTERSPEECH), Singapore, 2014, pp. 890–894.",
"[7] P. Golik, Z. Tuske, R. Schlu ̈ter, H. Ney, Convolutional Neural Networks for Acoustic Modeling of Raw Time Signal in LVCSR, in: Proceedings of the Annual Conference of the International Speech Communication Association (INTERSPEECH), 2015, pp. 26–30.",
"[8] Yedid Hoshen and Ron Weiss and Kevin W Wilson, Speech Acoustic Modeling from Raw Multichannel Waveforms, International Conference on Acoustics, Speech, and Signal Processing, 2015.",
"[9] D. Palaz, M. Magimai-Doss, and R. Collobert. Analysis of CNN-based Speech Recognition System using Raw Speech as Input, INTERSPEECH 2015, pages 11–15.",
"[10] T. N. Sainath, R. J. Weiss, A. Senior, K. W. Wilson, and O. Vinyals. Learning the Speech Front-end With Raw Waveform CLDNNs. Proceedings of the Annual Conference of the International Speech Communication Association (INTERSPEECH), 2015.",
"[11] Satt, Aharon & Rozenberg, Shai & Hoory, Ron. (2017). Efficient Emotion Recognition from Speech Using Deep Learning on Spectrograms. 1089-1093. Interspeech 2017.",
"[12] S. Davis and P. Mermelstein. Comparison of parametric representations for monosyllabic word recognition in continuously spoken sentences. IEEE Transactions on Acoustics, Speech and Signal Processing, 28(4):357–366, 1980."
] | [
"fact",
"fact",
"fact",
"fact",
"evaluation",
"evaluation",
"evaluation",
"evaluation",
"quote",
"fact",
"fact",
"fact",
"fact",
"fact",
"request",
"evaluation",
"evaluation",
"evaluation",
"fact",
"fact",
"fact",
"fact",
"fact",
"fact",
"fact",
"fact",
"fact",
"request",
"evaluation",
"evaluation",
"fact",
"evaluation",
"fact",
"request",
"evaluation",
"request",
"fact",
"fact",
"evaluation",
"evaluation",
"evaluation",
"evaluation",
"fact",
"evaluation",
"evaluation",
"evaluation",
"request",
"request",
"request",
"request",
"request",
"evaluation",
"evaluation",
"request",
"request",
"reference",
"reference",
"reference",
"reference",
"reference",
"reference",
"reference",
"reference",
"reference",
"reference",
"reference",
"reference"
] |
S1SG_l5gz | [
"This paper proposes to automatically recognize domain names as malicious or benign by deep networks (convnets and RNNs) trained to directly classify the character sequence as such.",
"Pros The paper addresses an important application of deep networks, comparing the performance of a variety of different types of model architectures.",
"The tested networks seem to perform reasonably well on the task.",
"Cons There is little novelty in the proposed method/models ",
"-- the paper is primarily focused on comparing existing models on a new task.",
"The descriptions of the different architectures compared are overly verbose ",
"-- they are all simple standard convnet / RNN architectures. ",
"The code specifying the models is also excessive for the main text ",
"-- it should be moved to an appendix or even left for a code release.",
"The comparisons between various architectures are not very enlightening ",
"as they aren’t done in a controlled way ",
"-- there are a large number of differences between any pair of models ",
"so it’s hard to tell where the performance differences come from. ",
"It’s also difficult to compare the learning curves among the different models (Fig 1) ",
"as they are in separate plots with differently scaled axes.",
"The proposed problem is an explicitly adversarial setting ",
"and adversarial examples are a well-known issue with deep networks and other models, ",
"but this issue is not addressed or analyzed in the paper. ",
"(In fact, the intro claims this is an advantage of not using hand-engineered features for malicious domain detection, seemingly ignoring the literature on adversarial examples for deep nets.) ",
"For example, in this case an attacker could start with a legitimate domain name and use black box adversarial attacks (or white box attacks, given access to the model weights) to derive a similar domain name that the models proposed here would classify as benign.",
"While this paper addresses an important problem, ",
"in its current form the novelty and analysis are limited ",
"and the paper has some presentation issues."
] | [
"fact",
"evaluation",
"evaluation",
"evaluation",
"fact",
"evaluation",
"evaluation",
"evaluation",
"request",
"evaluation",
"fact",
"evaluation",
"evaluation",
"evaluation",
"fact",
"fact",
"evaluation",
"fact",
"fact",
"fact",
"evaluation",
"evaluation",
"evaluation"
] |
BJQGTw5lM | [
"This manuscript explores the idea of adding noise to the adversary's play in GAN dynamics over an RKHS. ",
"This is equivalent to adding noise to the gradient update, using the duality of reproducing kernels. ",
"Unfortunately, the evaluation here is wholly unsatisfactory to justify the manuscript's claims. ",
"No concrete practical algorithm specification is given (only a couple of ideas to inject noise listed), ",
"only a qualitative one on a 2-dimensional latent space in MNIST, and an inconclusive one using the much-doubted Parzen window KDE method. ",
"The idea as stated in the abstract and introduction may well be worth pursuing, ",
"but not on the evidence provided by the rest of the manuscript."
] | [
"fact",
"fact",
"evaluation",
"fact",
"fact",
"evaluation",
"evaluation"
] |
SyuPmP3lM | [
"The collaborative block that authors propose is a generalized module that can be inserted in deep architectures for better multi-task learning.",
"The problem is relevant as we are pushing deep networks to learn representation for multiple tasks.",
"The proposed method while simple is novel.",
"The few places where the paper needs improvement are: 1. The authors should test their collaborative block on multiple tasks where the tasks are less related.",
"Ex: Scene and object classification.",
"The current datasets where the model is evaluated is limited to Faces which is a constrained setting.",
"It would be great if Authors provide more experiments beyond Faces to test the universality of the proposed approach.",
"2. The Face datasets are rather small.",
"I wonder if the accuracy improvements hold on larger datasets and if authors can comment on any large scale experiments they have done using the proposed architecture.",
"In it's current form I would say the experiment section and large scale experiments are two places where the paper falls short."
] | [
"fact",
"fact",
"evaluation",
"request",
"request",
"fact",
"request",
"request",
"request",
"evaluation"
] |
B1ja8-9lf | [
"This paper presents a novel approach to calibrate classifiers for out of distribution samples.",
"In additional to the original cross entropy loss, the “confidence loss” was proposed to guarantee the out of distribution points have low confidence in the classifier.",
"As out of distribution samples are hard to obtain,",
"authors also propose to use GAN generating “boundary” samples as out of distribution samples.",
"The problem setting is new and objective (1) is interesting and reasonable.",
"However, I am not very convinced that objective (3) will generate boundary samples.",
"Suppose that theta is set appropriately so that p_theta (y|x) gives a uniform distribution over labels for out of distribution samples.",
"Because of the construction of U(y), which uniformly assign labels to generated out of distribution samples,",
"the conditional probability p_g (y|x) should always be uniform so p_g (y|x) divided by p_theta (y|x) is almost always 1.",
"The KL divergence in (a) of (3) should always be approximately 0 no matter what samples are generated.",
"I also have a few other concerns: 1. There seems to be a related work:",
"[1] Perello-Nieto et al., Background Check: A general technique to build more reliable and versatile classifiers, ICDM 2016,",
"Where authors constructed a classifier, which output K+1 labels and the K+1-th label is the “background noise” label for this classification problem.",
"Is the method in [1] applicable to this paper’s setting?",
"Moreover, [1] did not seem to generate any out of distribution samples.",
"2. I am not so sure that how the actual out of distribution detection was done",
"(did I miss something here?).",
"Authors repeatedly mentioned “maximum prediction values”,",
"but it was not defined throughout the paper.",
"Algorithm 1. is called “minimization for detection and generating out of distribution (samples)”,",
"but this is only gradient descent, right?",
"I do not see a detection procedure.",
"Given the title also contains “detecting”, I feel authors should write explicitly how the detection is done in the main body."
] | [
"fact",
"fact",
"evaluation",
"fact",
"evaluation",
"evaluation",
"fact",
"fact",
"fact",
"fact",
"fact",
"reference",
"fact",
"request",
"evaluation",
"evaluation",
"non-arg",
"fact",
"fact",
"fact",
"fact",
"fact",
"request"
] |
Bk-6h6Txz | [
"The article \"Contextual Explanation Networks\" introduces the class of models which learn the intermediate explanations in order to make final predictions.",
"The contexts can be learned by, in principle, any model including neural networks,",
"while the final predictions are supposed to be made by some simple models like linear ones.",
"The probabilistic model allows for the simultaneous training of explanation and prediction parts as opposed to some recent post-hoc methods.",
"The experimental part of the paper considers variety of experiments, including classification on MNIST, CIFAR-10, IMDB and also some experiments on survival analysis.",
"I should note, that the quality of the algorithm is in general similar to other methods considered (as expected).",
"However, while in some cases the CEN algorithm is slightly better, in other cases it appears to sufficiently loose, see for example left part of Figure 3(b) for MNIST data set.",
"It would be interesting to know the explanation.",
"Also, it would be interesting to have more examples of qualitative analysis to see, that the learned explanations are really useful.",
"I am a bit worried, that while we have interpretability with respect to intermediate features, these features theirselves might be very hard to interpret.",
"To sum up, I think that the general idea looks very natural and the results are quite supportive.",
"However, I don't feel myself confident enough in this area of research to make strong conclusion on the quality of the paper."
] | [
"fact",
"fact",
"fact",
"fact",
"fact",
"fact",
"fact",
"request",
"request",
"evaluation",
"evaluation",
"non-arg"
] |
S1kxi6OlM | [
"In general I find this to be a good paper and vote for acceptance. ",
"The paper is well-written and easy to follow. ",
"The proposed approach is a useful addition to existing literature.",
"Besides that I have not much to say except one point I would like to discuss: ",
"In 4.2 I am not fully convinced of using an adversial model for goal generation. ",
"RL algorithms generally suffer from poor stability ",
"and GANs themselves can have convergence issues. ",
"This imposes another layer of possible instability. ",
"Besides, generating useful reward function, while not trivial, can be seen as easier than solving the full RL problem. ",
"Can the authors argue why this model class was chosen over other, more simple, generative models? ",
"Furthermore, did the authors do experiments with simpler models?",
"Related: \"We found that the LSGAN works better than other forms of GAN for our problem.\" ",
"Was this improvement minor, or major, or didn't even work with other GAN types? ",
"This question is important, ",
"because for me the big question is if this model is universal and stable in a lot of applications or requires careful fine-tuning and monitoring."
] | [
"evaluation",
"evaluation",
"evaluation",
"evaluation",
"evaluation",
"evaluation",
"evaluation",
"fact",
"evaluation",
"request",
"request",
"quote",
"request",
"evaluation",
"evaluation"
] |
S1gH28vgM | [
"1) Summary This paper proposed a new method for predicting multiple future frames in videos. ",
"A new formulation is proposed where the frames’ inherent noise is modeled separate from the uncertainty of the future. ",
"This separation allows for directly modeling the stochasticity in the sequence through a random variable z ~ p(z) where the posterior q(z | past and future frames) is approximated by a neural network, ",
"and as a result, sampling of a random future is possible through sampling from the prior p(z) during testing. ",
"The random variable z can be modeled in a time-variant and time-invariant way. ",
"Additionally, this paper proposes a training procedure to prevent their method from ignoring the stochastic phenomena modeled by z. ",
"In the experimental section, the authors highlight the advantages of their method in 1) a synthetic dataset of shapes meant to clearly show the stochasticity in the prediction, 2) two robotic arm datasets for video prediction given and not given actions, and 3) A challenging human action dataset in which they perform future prediction only given previous frames.",
"2) Pros: + Novel/Sound future frame prediction formulation and training for modeling the stochasticity of future prediction.",
"+ Experiments on the synthetic shapes and robotic arm datasets highlight the proposed method’s power of multiple future frame prediction possible.",
"+ Good analysis on the number of samples improving the chance of outputting the correct future, the modeling power of the posterior for reconstructing the future, and a wide variety of qualitative examples.",
"+ Work is significant for the problem of modeling the stochastic nature of future frame prediction in videos.",
"3) Cons: Approximate posterior in non-synthetic datasets: The variable z seems to not be modeling the future very well. ",
"In the robot arm qualitative experiments, the robot motion is well modeled, however, the background is not. ",
"Given that for the approximate posterior computation the entire sequence is given (e.g. reconstruction is performed), ",
"I would expect the background motion to also be modeled well. ",
"This issue is more evident in the Human 3.6M experiments, ",
"as it seems to output blurriness regardless of the true future being observed. ",
"This problem may mean the method is failing to model a large variety of objects and clearly works for the robotic arm ",
"because a very similar large shape (e.g. robot arm) is seen in the training data. ",
"Do you have any comments on this?",
"Finn et al 2016 PNSR performance on Human 3.6M: ",
"Is the same exact data, pre-processing, training, and architecture being utilized? ",
"In her paper, the PSNR for the first timestep on Human 3.6M is about 41 (maybe 42?) while in this paper it is 38.",
"Additional evaluation on Human 3.6M: PSNR is not a good evaluation metric for frame prediction ",
"as it is biased towards blurriness, ",
"and also SSIM does not give us an objective evaluation in the sense of semantic quality of predicted frames. ",
"It would be good if the authors present additional quantitative evaluation to show that the predicted frames contain useful semantic information [1, 2, 3, 4]. ",
"For example, evaluating the predicted frames for the Human 3.6M dataset to see if the human is still detectable in the image or if the expected action is being predicted could be useful to verify that the predicted frames contain the expected meaningful information compared to the baselines.",
"Additional comments: Are all 15 actions being used for the Human 3.6M experiments? ",
"If so, the fact of the time-invariant model performs better than the time-variant one may not be the consistent action being performed (last sentence of 5.2). ",
"The motion performed by the actors in each action highly overlaps (talking on the phone action may go from sitting to walking a little to sitting again, and so on). ",
"Unless actions such as walking and discussion were only used, it is unlikely the time-invariant z is performing better because of consistent action. ",
"Do you have any comments on this?",
"4) Conclusion This paper proposes an interesting novel approach for predicting multiple futures in videos, ",
"however, the results are not fully convincing in all datasets. ",
"If the authors can provide additional quantitative evaluation besides PSNR and SSIM (e.g. evaluation on semantic quality), and also address the comments above, the current score will improve.",
"References: [1] Emily Denton and Vighnesh Birodkar. Unsupervised Learning of Disentangled Representations from Video. In NIPS, 2017.",
"[2] Ruben Villegas, Jimei Yang, Yuliang Zou, Sungryull Sohn, Xunyu Lin, and Honglak Lee. Learning to generate long-term future via hierarchical prediction. In ICML, 2017.",
"[3] Tero Karras, Timo Aila, Samuli Laine, and Jaakko Lehtinen. Progressive Growing of GANs for Improved Quality, Stability, and Variation. arXiv preprint arXiv:1710.10196, 2017.",
"[4] Tim Salimans, Ian Goodfellow, Wojciech Zaremba, Vicki Cheung, Alec Radford, and Xi Chen. Improved Techniques for Training GANs. In NIPS, 2017."
] | [
"fact",
"fact",
"fact",
"fact",
"fact",
"fact",
"fact",
"fact",
"evaluation",
"evaluation",
"evaluation",
"evaluation",
"evaluation",
"fact",
"evaluation",
"evaluation",
"evaluation",
"evaluation",
"fact",
"non-arg",
"reference",
"request",
"fact",
"evaluation",
"fact",
"fact",
"request",
"request",
"non-arg",
"fact",
"evaluation",
"evaluation",
"non-arg",
"evaluation",
"evaluation",
"request",
"reference",
"reference",
"reference",
"reference"
] |
B1ZlEVXyf | [
"Summary ======== The authors present CLEVER, an algorithm which consists in evaluating the (local) Lipschitz constant of a trained network around a data point. ",
"This is used to compute a lower-bound on the minimal perturbation of the data point needed to fool the network.",
"The method proposed in the paper already exists for classical function, ",
"they only transpose it to neural networks. ",
"Moreover, the lower bound comes from basic results in the analysis of Lipschitz continuous functions.",
"Clarity ===== The paper is clear and well-written.",
"Originality ========= This idea is not new: ",
"if we search for \"Lipschitz constant estimation\" in google scholar, we get for example Wood, G. R., and B. P. Zhang. \"Estimation of the Lipschitz constant of a function.\" (1996)",
"which presents a similar algorithm (i.e., estimation of the maximum slope with reverse Weibull).",
"Technical quality ============== The main theoretical result in the paper is the analysis of the lower-bound on \\delta, the smallest perturbation to apply on a data point to fool the network. ",
"This result is obtained almost directly by writing the bound on Lipschitz-continuous function | f(y)-f(x) | < L || y-x || where x = x_0 and y = x_0 + \\delta.",
"Comments: - Lemma 3.1: why citing Paulavicius and Zilinskas for the definition of Lipschitz continuity? ",
"Moreover, a Lipschitz-continuous function does not need to be differentiable at all (e.g. |x| is Lipschitz with constant 1 but sharp at x=0). ",
"Indeed, this constant can be easier obtained if the gradient exists, ",
"but this is not a requirement.",
"- (Flaw?) Theorem 3.2 : This theorem works for fixed target-class ",
"since g = f_c - f_j for fixed g. ",
"However, once g = min_j f_c - f_j, this theorem is not clear with the constant Lq. ",
"Indeed, the function g should be g(x) = min_{k \\neq c} f_c(x) - f_k(x).",
"Thus its Lipschitz constant is different, potentially equal to L_q = max_{k} \\| L_q^k \\|, where L_q^k is the Lipschitz constant of f_c-f_k. ",
"If the theorem remains unchanged after this modification, you should clarify the proof. ",
"Otherwise, the theorem will work with the maximum over all Lipschitz constants but the theoretical result will be weakened.",
"- Theorem 4.1: I do not see the purpose of this result in this paper. ",
"This should be better motivated.",
"Numerical experiments ==================== Globally, the numerical experiments are in favor of the presented method. ",
"The authors should also add information about the time it takes to compute the bound, the evolution of the bound in function of the number of samples and the distribution of the relative gap between the lower-bound and the best adversarial example.",
"Moreover, the numerical experiments look to be realized in the context of targeted attack. ",
"To show the real effectiveness of the approach, the authors should also show the effectiveness of the lower-bound in the context of non-targeted attack."
] | [
"fact",
"fact",
"fact",
"fact",
"fact",
"evaluation",
"evaluation",
"fact",
"evaluation",
"fact",
"evaluation",
"evaluation",
"fact",
"evaluation",
"fact",
"fact",
"fact",
"evaluation",
"request",
"fact",
"request",
"fact",
"evaluation",
"request",
"evaluation",
"request",
"evaluation",
"request"
] |
ByhgguzeM | [
"The paper presents a method to parametrize unitary matrices in an RNN as a Kronecker product of smaller matrices. ",
"Given N inputs and output, this method allows one to specify a linear transformation with O(log(N)) parameters, and perform a forward and backward pass in O(Nlog(N)) time. ",
"In addition a relaxation is performed allowing each constituent to deviate a bit from unitarity (“soft unitary constraint”).",
"The paper shows nice results on a number of small tasks. ",
"The idea is original to the best of my knowledge and is presented clearly.",
"I especially like the idea of “soft unitary constraint” which can be applied very efficiently in this factorized setup. ",
"I think this is the main contribution of this work.",
"However the paper in its current form has a number of problems:",
"- The authors state that a major shortcoming of previous (efficient) unitary RNN methods is the lack of ability to span the entire space of unitary matrices. ",
"This method presents a family that can span the entire space, but the efficient parts of this family (which give the promised speedup) only span a tiny fraction of it, ",
"as they require only O(log(N)) params to specify an O(N^2) unitary matrix. ",
"Indeed in the experimental section only those members are tested.",
"- Another claim that is made is that complex numbers are key, and again the argument is the need to span the entire space of unitary matrices, ",
"but the same comment still hold - that is not the space this work is really dealing with, ",
"and no experimental evidence is provided that using complex numbers was really needed.",
"- In the experimental section an emphasis is made as to how small the number of recurrent params are, ",
"but at the same time the input/output projections are very large, leaving the reader wondering if the workload simply shifted from the RNN to the projections. ",
"This needs to be addressed.",
"- Another aspect of the previous points is that it’s not clear if stacking KRU layers will work well. ",
"This is important ",
"as stacking LSTMs is a common practice. ",
"Efficient KRU span a restricted subspace whose elements might not compose into structures that are expressive enough. ",
"One way to overcome this potential problem is to add projection matrices between layers that will do some mixing, ",
"but this will blow the number of parameters. ",
"This needs to be explored.",
"- The authors claim that the soft unitary constraint was key for the success of the network, ",
"yet no details are provided as to how this constraint was applied, ",
"and no analysis was made for its significance."
] | [
"fact",
"fact",
"fact",
"evaluation",
"evaluation",
"evaluation",
"evaluation",
"fact",
"fact",
"fact",
"fact",
"fact",
"fact",
"fact",
"fact",
"fact",
"evaluation",
"request",
"evaluation",
"evaluation",
"evaluation",
"fact",
"fact",
"fact",
"request",
"fact",
"fact",
"fact"
] |
ryx2q7_eG | [
"This paper proposes for training a question answering model from answers only and a KB by learning latent trees that capture the syntax and learn the semantic of words, including referential terms like \"red\" and also compositional operators like \"not\".",
"I think this model is elegant, beautiful and timely.",
"The authors do a good job of explaining it clearly.",
"I like the modules of composition that seem to make a very intuitive sense for the \"algebra\" that is required and the parsing algorithm is clean.",
"However, I think that the evaluation is lacking, and in some sense the model exposes the weakness of the dataset that it uses for evaluation.",
"I have 2.5 major issues with the paper and a few minor comments:",
"Parsing: * The authors don't really say what is the base case for \\Psi that scores tokens",
"(unless I missed it and if indeed it is missing it really needs to be added)",
"and only provide the recursive case.",
"From that I understand that the only features that they use are whether a certain word makes sense in a certain position of the rule application in the context of the question.",
"While these features are based on Durrett et al.'s neural syntactic parser it seems like a pretty weak signal to learn from.",
"This makes me wonder, how does the parser learn whether one parse is better than the other?",
"Only based on this signal?",
"It makes me suspicious that the distribution of language is not very ambiguous and that as long as you can construct a tree in some context you can do it in almost any other context.",
"This is probably due to the fact that the CLEVR dataset was generated mostly using templates and is not really natural utterances produced by people.",
"Of course many people have published on CLEVR although of its language limitations,",
"but I was a bit surprised that only these features are enough to solve the problem completely,",
"and this makes me curious as to how hard is it to reverse-engineer the way that the language was generated with a context-free mechanism that is similar to how the data was produced.",
"* Related to that is that the decision for a score of a certain type t for a span (i,j) is the sum for all possible rule applications, rather than a max, which again means that there is no competition between different parse trees that result with the same type of a single span.",
"Can the authors say something about what the parser learns?",
"Does it learn to extract from the noise clear parse trees?",
"What is the distribution of rules in those sums?",
"is there some rule that is more preferred than others usually?",
"It seems like there is loss of information in the sum",
"and it is unclear what is the effect of that in the paper.",
"Evaluation: * Related to that is indeed the fact that they use CLEVR only.",
"There is now the Cornell NLVR dataset that is more challenging from a language perspective",
"and it would be great to have an evaluation there as well.",
"Also the authors only compare to 3 baselines where 2 don't even see the entire KB,",
"so the only \"real\" baseline is relation net.",
"The authors indeed state that it is state-of-the-art on clevr.",
"* It is worth noting that relation net is reported to get 95.5 accuracy while the authors have 89.4.",
"They use a subset so this might be the reason,",
"but I am not sure how they compared to relation net exactly.",
"Did they re-tune parameters once you have the new dataset?",
"This could make a difference in the final accuracy and cause an unfair advantage.",
"* I would really appreciate more analysis on the trees that one gets.",
"Are sub-trees interpretable?",
"Can one trace the process of composition?",
"This could have been really nice if one could do that.",
"The authors have a figure of a purported tree, but where does this tree come from?",
"From the mode?",
"Form the authors?",
"Scalability: * How much of a problem would it be to scale this?",
"Will this work in larger domains?",
"It seems they compute an attention score over every entity and also over a matrix that is squared in the number of entities.",
"So it seems if the number of entities is large that could be very problematic.",
"Once one moves to larger KBs it might become hard to maintain full differentiability which is one of the main selling points of the paper.",
"Minor comments: * I think the phrase \"attention\" is a bit confusing -",
"I thought of a distribution over entities at first.",
"* The feature function is not super clearly written I think - perhaps clarify in text a bit more what it does.",
"* I did not get how the denotation that is based on a specific rule applycation t_1 + t_2 --> t works.",
"Is it by looking at the grounding that is the result of that rule application?",
"* Authors say that the neural enquirer and neural symbolic machines produce flat programs -",
"that is not really true, the programs are just a linearized form of a tree,",
"so there is nothing very flat about it in my opinion.",
"Overall, I really enjoyed reading the paper,",
"but I was left wondering whether the fact that it works so well mostly attests to the way the data was generated and am still wondering how easy it would be to make this work in for more natural language or when the KB is large."
] | [
"fact",
"evaluation",
"evaluation",
"evaluation",
"evaluation",
"non-arg",
"fact",
"request",
"fact",
"fact",
"evaluation",
"non-arg",
"non-arg",
"evaluation",
"evaluation",
"fact",
"evaluation",
"evaluation",
"fact",
"request",
"non-arg",
"non-arg",
"non-arg",
"evaluation",
"evaluation",
"fact",
"fact",
"request",
"fact",
"evaluation",
"fact",
"fact",
"fact",
"non-arg",
"non-arg",
"fact",
"request",
"non-arg",
"non-arg",
"evaluation",
"non-arg",
"non-arg",
"non-arg",
"non-arg",
"non-arg",
"fact",
"evaluation",
"evaluation",
"evaluation",
"non-arg",
"evaluation",
"evaluation",
"non-arg",
"fact",
"fact",
"evaluation",
"evaluation",
"evaluation"
] |
BJ0qmr9xf | [
"The paper solves the problem of how to do autonomous resets, ",
"which is an important problem in real world RL. ",
"The method is novel, ",
"the explanation is clear, ",
"and has good experimental results.",
"Pros: 1. The approach is simple, solves a task of practical importance, and performs well in the experiments. ",
"2. The experimental section performs good ablation studies wrt fewer reset thresholds, reset attempts, use of ensembles.",
"Cons: 1. The method is evaluated only for 3 tasks, which are all in simulation, and on no real world tasks. ",
"Additional tasks could be useful, especially for qualitative analysis of the learned reset policies.",
"2. It seems that while the method does reduce hard resets, ",
"it would be more convincing if it can solve tasks which a model without a reset policy couldnt. ",
"Right now, the methods without the reset policy perform about equally well on final reward.",
"3. The method wont be applicable to RL environments where we will need to take multiple non-invertible actions to achieve the goal (an analogy would be multiple levels in a game). ",
"In such situations, one might want to use the reset policy to go back to intermediate “start” states from where we can continue again, rather than the original start state always.",
"Conclusion/Significance: The approach is a step in the right direction, ",
"and further refinements can make it a significant contribution to robotics work."
] | [
"fact",
"evaluation",
"evaluation",
"evaluation",
"evaluation",
"evaluation",
"fact",
"fact",
"request",
"fact",
"evaluation",
"fact",
"fact",
"evaluation",
"evaluation",
"evaluation"
] |
ry07SzQgG | [
"This paper investigates human priors for playing video games.",
"Considering a simple video game, where an agent receives a reward when she completes a game board, this paper starts by stating that: -\tFirstly, the humans perform better than an RL agent to complete the game board.",
"-\tSecondly, with a simple modification of textures the performances of human players collapse, while those of a RL agent stay the same.",
"If I have no doubts about these results, I have a concern about the method. ",
"In the case of human players the time needed to complete the game is plotted, ",
"and in the case of a RL agent the number of steps needed to complete the game is plotted (fig 1). ",
"Formally, we cannot conclude that one minute is lesser than 4 million of steps.",
"This issue could be easily fixed. ",
"Unfortunately, I have other concerns about the method and the conclusions.",
"For instance, masking where objects are or suppressing visual similarity between similar objects should also deteriorate the performance of a RL agent. ",
"So it cannot be concluded that the change of performances is due to human priors. ",
"In these cases, I think that the change of performances is due to the increased difficulty of the game.",
"The authors have to include RL agent in all their experiments to be able to dissociate what is due to human priors and what is due to the noise introduced in the game."
] | [
"fact",
"fact",
"fact",
"evaluation",
"fact",
"fact",
"fact",
"evaluation",
"evaluation",
"fact",
"fact",
"fact",
"request"
] |
S1ZbRMqlM | [
"The paper suggests taking GloVe word vectors, adjust them, and then use a non-Euclidean similarity function between them.",
"The idea is tested on very small data sets (80 and 50 examples, respectively).",
"The proposed techniques are a combination of previously published steps,",
"and the new algorithm fails to reach state-of-the-art on the tiny data sets.",
"It isn't clear what the authors are trying to prove,",
"nor whether they have successfully proven what they are trying to prove.",
"Is the point that GloVe is a bad algorithm?",
"That these steps are general?",
"If the latter, then the experimental results are far weaker than what I would find convincing.",
"Why not try on multiple different word embeddings?",
"What happens if you start with random vectors?",
"What happens when you try a bigger data set or a more complex problem?"
] | [
"fact",
"fact",
"fact",
"fact",
"evaluation",
"evaluation",
"non-arg",
"non-arg",
"evaluation",
"request",
"non-arg",
"non-arg"
] |
BympCwwgf | [
"This paper presents a method to cope with adversarial examples in classification tasks, leveraging a generative model of the inputs.",
"Given an accurate generative model of the input, this approach first projects the input onto the manifold learned by the generative model",
"(the idea being that inputs on this manifold reflect the non-adversarial input distribution).",
"This projected input is then used to produce the classification probabilities.",
"The authors test their method on various adversarially constructed inputs (with varying degrees of noise).",
"Questions/Comments: - I am interested in unpacking the improvement of Defense-GAN over the MagNet auto-encoder based method.",
"Is the MagNet auto-encoder suffering lower accuracy because the projection of an adversarial image is based on an encoding function that is learned only on true data?",
"If the decoder from the MagNet approach were treated purely as a generative model, and the same optimization-based projection approach (proposed in this work) was followed, would the results be comparable?",
"- Is there anything special about the GAN approach, versus other generative approaches?",
"- In the black-box vs. white-box scenarios, can the attacker know the GAN parameters?",
"Is that what is meant by the \"defense network\" (in experiments bullet 2)?",
"- How computationally expensive is this approach take compared to MagNet or other adversarial approaches?",
"Quality: The method appears to be technically correct.",
"Clarity: This paper clearly written;",
"both method and experiments are presented well.",
"Originality: I am not familiar enough with adversarial learning to assess the novelty of this approach.",
"Significance: I believe the main contribution of this method is the optimization-based approach to project onto a generative model's manifold.",
"I think this kernel has the potential to be explored further (e.g. computational speed-up, projection metrics)."
] | [
"fact",
"fact",
"fact",
"fact",
"fact",
"evaluation",
"non-arg",
"non-arg",
"non-arg",
"non-arg",
"non-arg",
"non-arg",
"evaluation",
"evaluation",
"evaluation",
"non-arg",
"evaluation",
"evaluation"
] |
B1g5pBTxz | [
"The article \"Do GANs Learn the Distribution? Some Theory and Empirics\" considers the important problem of quantifying whether the distributions obtained from generative adversarial networks come close to the actual distribution of images.",
"The authors argue that GANs in fact generate the distributions with fairly low support.",
"The proposed approach relies on so-called birthday paradox",
"which allows to estimate the number of objects in the support by counting number of matching (or very similar) pairs in the generated sample.",
"This test is expected to experimentally support the previous theoretical analysis by Arora et al. (2017).",
"The further theoretical analysis is also performed showing that for encoder-decoder GAN architectures the distributions with low support can be very close to the optimum of the specific (BiGAN) objective.",
"The experimental part of the paper considers the CelebA and CIFAR-10 datasets.",
"We definitely see many very similar images in fairly small sample generated.",
"So, the general claim is supported.",
"However, if you look closely at some pictures, you can see that they are very different though reported as similar.",
"For example, some deer or truck pictures.",
"That's why I would recommend to reevaluate the results visually,",
"which may lead to some change in the number of near duplicates and consequently the final support estimates.",
"To sum up, I think that the general idea looks very natural and the results are supportive.",
"On theoretical side, the results seem fair (though I didn't check the proofs)",
"and, being partly based on the previous results of Arora et al. (2017), clearly make a step further."
] | [
"fact",
"fact",
"fact",
"fact",
"fact",
"fact",
"fact",
"evaluation",
"evaluation",
"fact",
"fact",
"request",
"evaluation",
"evaluation",
"evaluation",
"evaluation"
] |
r1RTd8hgG | [
"The proposed method is a classifier that is fair and works in collaboration with an unfair (but presumably accurate model). ",
"The novel classifier is the result of the optimisation of a loss function ",
"(composed of a part similar to a logistic regression model and a part being the disparate impact). ",
"Hence, it can be interpreted as a logistic loss with a fairness regularisation.",
"The results are promising and the applications are very important for the acceptance of ML approaches in the society.",
"
However, I believe that the model could be made more general (than a fairness regularized logistic loss) and its theoretical properties studied.",
"Finally, this paper used uncommon vocabulary (for the machine learning community) ",
"and it make is difficult to follow sometimes (for example, the use of a Decision-Maker entity).",
"When reading the submitted paper, it was unclear (until section 6) how deferring could help fairness. ",
"Hence, the structure of the paper could maybe be improved by introducing the cost function earlier in the manuscript (as a fairness regularised loss).",
"To conclude, although the application is of high interest and the numerical results encouraging, ",
"the methodological approach does not seem to be very novel.",
"Minor comment : - The list of authors of the reference “Machine bias : theres software…” apperars incorrectly (some comma may be missing in the .bib file) ",
"and there is a small typo in the title.",
"Possible extensions :- The proposed fairness aware loss could be made more general (and not only in the case of a logistic model) ",
"- It could also be generalised to a mixture of biased classifier (more than on DM)."
] | [
"evaluation",
"fact",
"fact",
"fact",
"evaluation",
"evaluation",
"evaluation",
"evaluation",
"evaluation",
"request",
"evaluation",
"evaluation",
"fact",
"fact",
"request",
"request"
] |
Hk6aJkmWM | [
"This paper proposes to use RGANs and RCGANS to generate synthetic sequences of actual data. ",
"They demonstrate the quality of the sequences on sine waves, MNIST, and ICU telemetry data.",
"The authors demonstrate novel approaches for generating real-valued sequences using adversarial training, a train on synthetic, test of real and vice versa method for evaluating GANS, generating synthetic medical time series data, and an empirical privacy analysis. ",
"Major - the medical use case is not motivating. ",
"de-identifying the 4 telemetry measures is extremely easy ",
"and there is little evidence to show that it is even possible to reidentify individuals using these 4 measures. ",
"our institutional review board would certainly allow self-certification of the data (i.e. removing the patient identifiers and publishing the first 4 hours of sequences).",
"- the labels selected by the authors for the icu example are to forecast the next 15 minutes and whether a critical value is reached. ",
"Please add information about how this critical value was generated. ",
"Also it would be very useful to say that a physician was consulted and that the critical values were \"clinically\" useful.",
"- the changes in performance of TSTR are large enough that I would have difficulty trusting any experiments using the synthetic data. ",
"If I optimized a method using this synthetic data, I would still need to assess the result on real data.",
"- In addition it is unclear whether this synthetic process would actually generate results that are clinically useful. ",
"The authors certainly make a convincing statement about the internal validity of the method. ",
"An externally valid measure would strengthen the results. ",
"I'm not quite sure how the authors could externally validate the synthetic data ",
"as this would also require generating synthetic outcome measures. ",
"I think it would be possible for the synthetic sequence to also generate an outcome measure (i.e. death) based on the first 4 hours of stay.",
"Minor- write in the description for table 1 what task the accuracies correspond.",
"Summary The authors present methods for generating synthetic sequences. ",
"The MNIST example is compelling. ",
"However the ICU example has some pitfalls which need to be addressed."
] | [
"fact",
"fact",
"fact",
"evaluation",
"evaluation",
"evaluation",
"non-arg",
"fact",
"request",
"request",
"evaluation",
"evaluation",
"evaluation",
"evaluation",
"evaluation",
"evaluation",
"fact",
"fact",
"request",
"fact",
"evaluation",
"evaluation"
] |
rJOVWxjez | [
"The authors describe a new defense mechanism against adversarial attacks on classifiers (e.g., FGSM).",
"They propose utilizing Generative Adversarial Networks (GAN),",
"which are usually used for training generative models for an unknown distribution,",
"but have a natural adversarial interpretation.",
"In particular, a GAN consists of a generator NN G which maps a random vector z to an example x, and a discriminator NN D which seeks to discriminate between an examples produced by G and examples drawn from the true distribution.",
"The GAN is trained to minimize the max min loss of D on this discrimination task, thereby producing a G (in the limit) whose outputs are indistinguishable from the true distribution by the best discriminator.",
"Utilizing a trained GAN, the authors propose the following defense at inference time.",
"Given a sample x (which has been adversarially perturbed), first project x onto the range of G by solving the minimization problem z* = argmin_z ||G(z) - x||_2.",
"This is done by SGD.",
"Then apply any classifier trained on the true distribution on the resulting x* = G(z*).",
"In the case of existing black-box attacks, the authors argue (convincingly) that the method is both flexible and empirically effective.",
"In particular, the defense can be applied in conjunction with any classifier (including already hardened classifiers), and does not assume any specific attack model.",
"Nevertheless, it appears to be effective against FGSM attacks, and competitive with adversarial training specifically to defend against FGSM.",
"The authors provide less-convincing evidence that the defense is effective against white-box attacks.",
"In particular, the method is shown to be robust against FGSM, RAND+FGSM, and CW white-box attacks.",
"However, it is not clear to me that the method is invulnerable to novel white-box attacks.",
"In particular, it seems that the attacker can design an x which projects onto some desired x* (using some other method entirely), which then fools the classifier downstream.",
"Nevertheless, the method is shown to be an effective tool for hardening any classifier against existing black-box attacks",
"(which is arguably of great practical value).",
"It is novel and should generate further research with respect to understanding its vulnerabilities more completely.",
"Minor Comments: The sentence starting “Unless otherwise specified…” at the top of page 7 is confusing given the actual contents of Tables 1 and 2, which are clarified only by looking at Table 5 in the appendix.",
"This should be fixed."
] | [
"fact",
"fact",
"fact",
"evaluation",
"fact",
"fact",
"fact",
"fact",
"fact",
"fact",
"fact",
"fact",
"fact",
"evaluation",
"fact",
"evaluation",
"fact",
"fact",
"evaluation",
"evaluation",
"evaluation",
"request"
] |
SJxF3VsxG | [
"This paper describes computationally efficient methods for training adversarially robust deep neural networks for image classification.",
"(These methods may extend to other machine learning models and domains as well, but that's beyond the scope of this paper.)",
"The former standard method for generating adversarially images quickly and using them in training was to do a single gradient step to increase the loss of the true label or decrease the loss of an alternate label.",
"This paper shows that such training methods only lead to robustness against these \"weak\" adversarial examples, leaving the adversarially-trained models vulnerable to multi-step white-box attacks and black-box attacks (adversarial examples generated to attack alternate models).",
"There are two proposed solutions.",
"The first is to generate additional adversarial examples from other models and use them in training.",
"This seems to yield robustness against black-box attacks from held-out models as well.",
"Of course, it requires that you have a somewhat diverse group of models to choose from.",
"If that's the case, why not directly build an ensemble of all the models?",
"An ensemble of neural networks can still be represented as a neural network, although a more computationally costly one.",
"Thus, while this heuristic appears to be useful with current models against current attacks,",
"I don't know how well it will hold up in the future.",
"The second solution is to add random noise before taking the gradient step.",
"This yields more effective adversarial examples, both for attacking models and for training,",
"because it relies less on the local gradient.",
"This is another simple idea that appears to be effective.",
"However, I would be interested to see a comparison to a 2-step gradient-based attack.",
"R+Step-LL can be viewed as a 2-step attack: a random step followed by a gradient step.",
"What if both steps were gradient steps instead?",
"This interpolates between Step-LL and I-Step-LL, with an intermediate computational cost.",
"It would be very interesting to know if R+Step-LL is more or less effective than 2+Step-LL, and how large the difference is.",
"I like that this paper demonstrates the weakness of previous methods, including extensive experiments and a very nice visualization of the loss landscape in two adversarial dimensions.",
"The proposed heuristics seem effective in practice,",
"but they're somewhat ad hoc",
"and there is no analysis of how these heuristics might or might not be vulnerable to future attacks."
] | [
"fact",
"fact",
"evaluation",
"fact",
"fact",
"fact",
"evaluation",
"fact",
"request",
"fact",
"evaluation",
"evaluation",
"fact",
"fact",
"fact",
"evaluation",
"request",
"evaluation",
"non-arg",
"fact",
"evaluation",
"fact",
"evaluation",
"evaluation",
"fact"
] |
r1rOlgOlz | [
"Authors describe a procedure of building their production recommender system from scratch, begining with formulating the recommendation problem, label data formation, model construction and learning. ",
"They use several different evaluation techniques to show how successful their model is (offline metrics, A/B test results, etc.)",
"Most of the originality comes from integrating time decay of purchases into the learning framework. ",
"Rest of presented work is more or less standard.",
"Paper may be useful to practitioners who are looking to implement something like this in production."
] | [
"fact",
"fact",
"evaluation",
"evaluation",
"evaluation"
] |
rJBLYC--f | [
"The paper proposes a novel approach on estimating the parameters \\nof Mean field games (MFG).",
"The key of the method is a reduction of the unknown parameter MFG to an unknown parameter Markov Decision Process (MDP).\\n\\n",
"This is an important class of models",
"and I recommend the acceptance of the paper.\\n\\n",
"I think that the general discussion about the collective behavior application should be more carefully presented",
"and some better examples of applications should be easy to provide.",
"In addition the authors may want to enrich their literature review",
"and give references to alternative work on unknown MDP estimation methods cf. [1], [2] below. \\n\\n",
"[1] Burnetas, A. N., & Katehakis, M. N. (1997). Optimal adaptive policies for Markov decision processes. Mathematics of Operations Research, 22(1), 222-255.\\n\\n",
"[2] Budhiraja, A., Liu, X., & Shwartz, A. (2012). Action time sharing policies for ergodic control of Markov chains. SIAM Journal on Control and Optimization, 50(1), 171-195."
] | [
"fact",
"fact",
"evaluation",
"evaluation",
"request",
"request",
"request",
"request",
"reference",
"reference"
] |
B1LfYs_gf | [
"This paper proposes to use 3D conditional GAN models to generate fMRI scans. ",
"Using the generated images, paper reports improvement in classification accuracy on various tasks.",
"One claim of the paper is that a generative model of fMRI data can help to caracterize and understand variability of scans across subjects.",
"Article is based on recent works such as Wasserstein GANs and AC-GANs by (Odena et al., 2016).",
"Despite the rich literature of this recent topic ",
"the related work section is rather convincing.",
"Model presented extends IW-GAN by using 3D convolution and also by supervising the generator using sample labels.",
"Major: - The size of the generated images is up to 26x31x22 ",
"which is limited (about half the size of the actual resolution of fMRI data). ",
"As a consequence results on decoding learning task using low resolution images can end up worse than with the actual data (as pointed out).",
"What it means is that the actual impact of the work is probably limited.",
"- Generating high resolution images with GANs even on faces for which there is almost infinite data is still a challenge. ",
"Here a few thousand data points are used. ",
"So it raises too concerns: First is it enough?",
"Using so-called learning curves is a good way to answer this. ",
"Second is what are the contributions to the state-of-the-art of the 2 methods introduced? ",
"Said differently, as there is no classification results using images produced by an another GAN architecture ",
"it is hard to say that the extra complexity proposed here (which is a bit contribution of the work) is actually necessary.",
"Minor: - Fonts in figure 4 are too small."
] | [
"fact",
"fact",
"fact",
"fact",
"evaluation",
"evaluation",
"fact",
"fact",
"evaluation",
"fact",
"evaluation",
"evaluation",
"fact",
"evaluation",
"evaluation",
"evaluation",
"fact",
"evaluation",
"request"
] |
Hy4cMGVlf | [
"The authors build on the work of Tang et al. (2017), ",
"who made a minor change to the skip-thought model by decoding only the next sentence, rather than the previous one also. ",
"The additional minor change in this paper is to use a CNN, rather than RNN, decoder.",
"I am sympathetic to the goals of the work, and believe this sort of work should be carried out, ",
"but I see the contribution as too minor to constitute a paper at the conference track of a leading international conference such as ICLR. ",
"Given the incremental nature of the work, I think this would be a good fit for something like a short paper at *ACL.",
"I found the more theoretical motivation of the CNN decoder not terribly convincing, and somewhat post-hoc. ",
"I feel as though analogous arguments could just as easily be made for an RNN decoder.",
" Ultimately I see these questions - such as CNN vs. RNN for the decoder - as empirical ones.",
"Finally, the authors have admirably attempted a thorough comparison with existing work, in the related work section, ",
"but this section takes up a large chunk of the paper at the end, ",
"and again I would have preferred this section to be much shorter and more concise.",
"Summary: worthwhile empirical goal, ",
"but the paper could have been easily written using half as much space."
] | [
"evaluation",
"fact",
"evaluation",
"evaluation",
"evaluation",
"evaluation",
"evaluation",
"evaluation",
"evaluation",
"evaluation",
"fact",
"request",
"evaluation",
"evaluation"
] |
ryU7ZMsgf | [
"This paper presents a reparametrization of the perturbation applied to features in adversarial examples based attacks. ",
"It tests this attack variation on against Inception-family classifiers on ImageNet. ",
"It shows some experimental robustness to JPEG encoding defense.",
"Specifically about the method: Instead of perturbating a feature x_i by delta_i, as in other attacks, with delta_i in range [-Delta_i, Delta_i], they propose to perturbate x_i^*, which is recentered in the domain of x_i through a heuristic ((x_i ± Delta_i + domain boundary that would be clipped)/2), and have a similar heuristic for computing a Delta_i^*. ",
"Instead of perturbating x_i^* directly by delta_i, they compute the perturbed x by x_i^* + Delta_i^* * g(r_i), ",
"so they follow the gradient of loss to misclassify w.r.t. r (instead of delta). ",
"+/-: + The presentation of the method is clear.",
"+ ImageNet is a good dataset to benchmark on.",
"- (!) The (ensemble) white-box attack is effective ",
"but the results are not compared to anything else, e.g. it could be compared to (vanilla) FGSM nor C&W.",
"- The other attack demonstrated is actually a grey-box attack, ",
"as 4 out of the 5 classifiers are known, they are attacking the 5th, ",
"but in particular all the 5 classifiers are Inception-family models.",
"- The experimental section is a bit sloppy at times (e.g. enumerating more than what is actually done, starting at 3.1.1.).",
"- The results on their JPEG approximation scheme seem too explorative (early in their development) to be properly compared.",
"I think that the paper need some more work, in particular to make more convincing experiments that the benefit lies in CIA (baselines comparison), and that it really is robust across these defenses shown in the paper."
] | [
"fact",
"fact",
"fact",
"fact",
"fact",
"fact",
"evaluation",
"evaluation",
"evaluation",
"fact",
"fact",
"fact",
"fact",
"evaluation",
"evaluation",
"request"
] |
H1k_ZpFlf | [
"Summary: The paper proposes to learn new priors for latent codes z for GAN training.",
"for this the paper shows that there is a mismatch between the gaussian prior and an estimated of the latent codes of real data by reversal of the generator .",
"To fix this the paper proposes to learn a second GAN to learn the prior distributions of \"real latent code\" of the first GAN.",
"The first GAN then uses the second GAN as prior to generate the z codes.",
"Quality/clarity: The paper is well written and easy to follow.",
"Originality:pros: -The paper while simple sheds some light on important problem with the prior distribution used in GAN.",
"- the second GAN solution trained on reverse codes from real data is interesting",
"- In general the topic is interesting, the solution presented is simple but needs more study",
"cons: - It related to adversarial learned inference and BiGAN, in term of learning the mapping z ->x, x->z and seeking the agreement.",
"- The solution presented is not end to end",
"(learning a prior generator on learned models have been done in many previous works on encoder/decoder)",
"General Review: More experimentation with the latent codes will be interesting:",
"- Have you looked at the decay of the singular values of the latent codes obtained from reversing the generator?",
"Is this data low rank?",
"how does this change depending on the dimensionality of the latent codes?",
"Maybe adding plots to the paper can help.",
"- the prior agreement score is interesting",
"but assuming gaussian prior also for the learned latent codes from real data is maybe not adequate.",
"Maybe computing the entropy of the codes using a nearest neighbor estimate of the entropy can help understanding the entropy difference wrt to the isotropic gaussian prior?",
"- Have you tried to multiply the isotropic normal noise with the learned singular values and generate images from this new prior and compute inceptions scores etc?",
"Maybe also rotating the codes with the singular vector matrix V or \\Sigma^{0.5} V?",
"- What architecture did you use for the prior generator GAN?",
"- Have you thought of an end to end way to learn the prior generator GAN?"
] | [
"fact",
"fact",
"fact",
"fact",
"evaluation",
"evaluation",
"evaluation",
"evaluation",
"fact",
"fact",
"fact",
"request",
"request",
"request",
"request",
"request",
"evaluation",
"evaluation",
"request",
"request",
"request",
"request",
"request"
] |
HJ9LXfvlz | [
"Paper studies an interesting phenomenon of overparameterised models being able to learn well-generalising solutions.",
"It focuses on a setting with three crucial simplifications:",
"- data is linearly separable",
"- model is 1-hidden layer feed forward network with homogenous activations",
"- **only input-hidden layer weights** are trained, while the hidden-output layer's weights are fixed to be (v, v, v, ..., v, -v, -v, -v, ..., -v) (in particular -- (1,1,...,1,-1,-1,...,-1))",
"While the last assumption does not limit the expressiveness of the model in any way,",
"as homogenous activations have the property of f(ax)=af(x) (for positive a)",
"and so for any unconstrained model in the second layer, we can \"propagate\" its weights back into first layer and obtain functionally equivalent network.",
"However, learning dynamics of a model of form z(x) = SUM( g(Wx+b) ) - SUM( g(Vx+c) ) + d and \"standard\" neural model z(x) = Vg(Wx+b)+c can be completely different.",
"Consequently, while the results are very interesting, claiming their applicability to the deep models is (at this point) far fetched.",
"In particular, abstract suggests no simplifications are being made, which does not correspond to actual result in the paper.",
"The results themselves are interesting,",
"but due to the above restriction it is not clear whether it sheds any light on neural nets, or simply described a behaviour of very specific, non-standard shallow model.",
"I am happy to revisit my current rating given authors rephrase the paper so that the simplifications being made are clear both in abstract and in the text, and that (at least empirically) it does not affect learning in practice.",
"In other words - all the experiments in the paper follow the assumption made, if authors claim is that the restriction introduced does not matter,but make proofs too technical - at least experimental section should show this.",
"If the claims do not hold empirically without the assumptions made, then the assumptions are not realistic and cannot be used for explaining the behaviour of models we are interested in.",
"Pros: - tackling a hard problem of overparametrised models, without introducing common unrealistic assumptions of activations independence",
"- very nice result of \"phase change\" dependend on the size of hidden layer in section 7",
"Cons: - simplification with non-trainable second layer is currently not well studied in the paper;",
"and while not affecting expressive power - it is something that can change learning dynamics completely"
] | [
"fact",
"fact",
"fact",
"fact",
"fact",
"fact",
"fact",
"fact",
"evaluation",
"evaluation",
"fact",
"evaluation",
"evaluation",
"evaluation",
"request",
"evaluation",
"evaluation",
"evaluation",
"evaluation",
"evaluation"
] |
HkeOU0qgf | [
"The author unveils some properties of the resnets, for example, the cosine loss and l2 ratio of the layers. ",
"I think the author should place more focus to study \"real\" iterative inference with shared parameters rather than analyzing original resnets.",
"In resnet without sharing parameters, it is quite ambiguous to say whether it is doing representation learning or iterative refinement.",
"1. The cosine loss is not meaningful in the sense that the classification layer is trained on the output of the last residual block and fixed. ",
"Moving the classification layer to early layers will definitely result in accuracy loss. ",
"Even in non-residual network, we can always say that the vector h_{i+1} - h_i is refining h_i towards the negative gradient direction. ",
"The motivation of iterative inference would be to generate a feature that is easier to classify rather than to match the current fixed classifier. ",
"Thus the final classification layer should be retrained for every addition or removal of residual blocks.",
"2. The l2 ratio. The l2 ratio is small for higher residual layers, I'm not sure how much this phenomenon can prove that resnet is actually doing iterative inference.",
"3. In section 4.4 it is shown that unrolling the layers can improve the performance of the network. ",
"However, the same can be achieved by adding more unshared layers. ",
"I think the study should focus more on whether shared or unshared is better.",
"4. Section 4.5 is a bit weak in experiments, ",
"my conclusion is that currently it is still limited by batch normalization and optimization, ",
"the evidence is still not strong enough to show that iterative inference is advantageous / disadvantageous.",
"The the above said, I think the more important thing is how we can benefit from iterative inference interpretation, which is relatively weak in this paper."
] | [
"fact",
"request",
"evaluation",
"evaluation",
"evaluation",
"fact",
"evaluation",
"request",
"evaluation",
"fact",
"evaluation",
"request",
"evaluation",
"evaluation",
"evaluation",
"evaluation"
] |
SJw9gV2ZM | [
"This paper draws an interesting connection between deep neural networks and theories of quantum entanglement.",
"They leveraged the tool for analyzing quantum entanglement to deep neural networks,",
"and proposed a graph theoretical analysis for neural networks.",
"They demonstrated how their theory can help designing neural network architectures on the MNIST dataset.",
"I think the theoretical findings are novel",
"and may contribute to the important problem on understanding neural networks theoretically.",
"I am not familiar with the theory for quantum entanglement though."
] | [
"evaluation",
"fact",
"fact",
"fact",
"evaluation",
"evaluation",
"fact"
] |
BkEcWHKlf | [
"Pros: 1. It provided theoretic analysis why larger feature norm is preferred in feature representation learning.",
"2. A new regularization method (feature incay) is proposed.",
"Cons: It seems there is not much comparison between this proposed method and the concurrent work",
"\"COCO(Liu et al. (2017c))\"."
] | [
"fact",
"fact",
"fact",
"reference"
] |
SJTAcW5xf | [
"This paper describes a method for computing representations for out-of-vocabulary words, e.g. based on their spelling or dictionary definitions. ",
"The main difference from previous approaches is that the model is that the embeddings are trained end-to-end for a specific task, rather than trying to produce generically useful embeddings. ",
"The method leads to better performance than using no external resources, but not as high performance as using Glove embeddings. ",
"The paper is clearly written, and has useful ablation experiments. ",
"However, I have a couple of questions/concerns: - Most of the gains seem to come from using the spelling of the word. ",
"As the authors note, this kind of character level modelling has been used in many previous works. ",
"- I would be slightly surprised if no previous work has used external resources for training word representations using an end-task loss, ",
"but I don’t know the area well enough to make specific suggestions ",
"- I’m a little skeptical about how often this method would really be useful in practice. ",
"It seems to assume that you don’t have much unlabelled text (or you’d use Glove), ",
"but you probably need a large labelled dataset to learn how to read dictionary definitions well. ",
"All the experiments use large tasks ",
"- it would be helpful to have an experiment showing an improvement over character-level modelling on a smaller task.",
"- The results on SQUAD seem pretty weak - 52-64%, compared to the SOTA of 81. ",
"It seems like the proposed method is quite generic, ",
"so why not apply it to a stronger baseline?"
] | [
"fact",
"fact",
"fact",
"evaluation",
"evaluation",
"fact",
"evaluation",
"evaluation",
"evaluation",
"fact",
"evaluation",
"evaluation",
"request",
"evaluation",
"evaluation",
"request"
] |
Hk2dO8ngz | [
"This very well written paper covers the span between W-GAN and VAE.",
"For a reviewer who is not an expert in the domain, it reads very well,",
"and would have been of tutorial quality if space had allowed for more detailed explanations.",
"The appendix are very useful, and tutorial paper material (especially A).",
"While I am not sure description would be enough to reproduce and no code is provided, every aspect of the architecture, if not described, if referred as similar to some previous work.",
"There are also some notation shortcuts (not explained) in the proof of theorems that can lead to initial confusion, but they turn out to be non-ambiguous.",
"One that could be improved is P(P_X, P_G) where one loses the fact that the second random variable is Y.",
"This work contains plenty of novel material, which is clearly compared to previous work:",
"- The main consequence of the use of Wasserstein distance is the surprisingly simple and useful Theorem 1.",
"I could not verify its novelty, but this seems to be a great contribution.",
"- Blending GAN and auto-encoders has been tried in the past,",
"but the authors claim better theoretical foundations that lead to solutions that do not rquire min-max",
"- The use of MMD in the context of GANs has also been tried.",
"The authors claim that their use in the latent space makes it more practival",
"The experiments are very convincing, both numerically and visually.",
"Source of confusion: in algorithm 1 and 2, \\tilde{z} is \"sampled\" from Q_TH(Z|xi),",
"some one is lead to believe that this is the sampling process as in VAEs, while in reality Q_TH(Z|xi) is deterministic in the experiments."
] | [
"fact",
"evaluation",
"evaluation",
"evaluation",
"evaluation",
"evaluation",
"request",
"evaluation",
"evaluation",
"evaluation",
"fact",
"fact",
"fact",
"fact",
"evaluation",
"evaluation",
"evaluation"
] |
BkviGptxG | [
"This paper presents an alternative approach to constructing variational lower bounds on data log likelihood in deep, directed generative models with latent variables.",
"Specifically, the authors propose using approximate posteriors shared across groups of examples, rather than posteriors which treat examples independently.",
"The group-wise posteriors allow amortization of the information cost KL(group posterior || prior) across all examples in the group,",
"which the authors liken to the \"KL annealing\" tricks that are sometimes used to avoid posterior collapse when training models with strong decoders p(x|z) using current techniques for approximate variational inference in deep nets.",
"The presentation of the core idea is solid,",
"though it did take two read-throughs before the equations really clicked for me.",
"I think the paper could be improved by spending more time on a detailed description of the model for the Omniglot experiments (as illustrated in Figure 3).",
"E.g., explicitly describing how group-wise and per-example posteriors are composed in this model, using Equations and pseudo-code for the main training loop, would have saved me some time.",
"For readers less familiar with amortized variational inference in deep nets, the benefit would be larger.",
"I appreciate that the authors developed extensions of the core method to more complex group structures,",
"though I didn't find the related experiments particularly convincing.",
"Overall, I like this paper",
"and think the underlying group-wise posterior construction trick is worth exploring further.",
"Of course, the elephant in the room is how to determine the groups across which the posteriors can be shared and their information costs amortized."
] | [
"fact",
"fact",
"fact",
"fact",
"evaluation",
"non-arg",
"request",
"request",
"evaluation",
"evaluation",
"evaluation",
"evaluation",
"evaluation",
"evaluation"
] |
SylxFWcgG | [
"This paper extends an existing thread of neural computation research focused on learning resuable subprocedures (or options in RL-speak). ",
"Instead of simply input and output examples, as in most of the work in neural computation, they follow in the vein of the Neural Programmer-Interpreter (Reed and de Freitas, 2016) and Li et. al., 2017, ",
"where the supervision contains the full sequence of elementary actions in the domain for all samples, and some samples also contain the hierarchy of subprocedure calls.",
"The main focus of their work is learning from fewer fully annotated samples than prior work. ",
"They introduce two new ideas in order to enable this:",
"1. They limit the memory state of each level in the program heirarchy to simply a counter indicating the number of elementary actions/subprocedure calls taken so far (rather than a full RNN embedded hidden/cell state as in prior work). ",
"They also limit the subprocedures such that they do not accept any arguments.",
"2. By considering this very limited set of possible hidden states, they can compute the gradients using a dynamic program that seems to be more accurate than the approximate dynamic program used in Li et. al., 2017. ",
"The main limitation of the work is this extremely limited memory state, and the lack of arguments. ",
"Without arguments, everything that parameterizes the subprocedures must be in the visible world state. ",
"In both of their domains, this is true, ",
"but this places a significant limitation on the algorithms which can be modeled with this technique. ",
"Furthermore, the limited memory state means that the only way a subprocedure can remember anything about the current observation is to call a different subprocedure. ",
"Again, their two evalation tasks fit into this paradigm, ",
"but this places very significant limitations on the set of applicable domains. ",
"I would have like to see more discussion on how constraining these limitations would be in practice. ",
"For example, it seems it would be impossible for this architecture to perform the Nanocraft task if the parameters of the task (width, height, etc.) were only provided in the first observation, rather than every observation. ",
"None-the-less I think this work is an important step in our understanding of the learning dynamics for neural programs. ",
"In particular, while the RNN hidden state memory used by the prior work enables the learning of more complicted programs *in theory*, this has not been shown in practice. ",
"So, it's possible that all the prior work is doing is learning to approixmate a much simpler architecture of this form. ",
"Specifically, I think this work can act as a great base-line by forcing future work to focus on domains which cannot be easily solved by a simpler architecture of this form. ",
"This limitation will also force the community to think about which tasks require a more complicated form of memory, and which can be solved with a very simple memory of this form.",
"I also have the following additional concerns about the paper: 1. I found the current explanation of the algorithm to be very difficult to understand. ",
"It's extremely difficult to understand the core method without reading the appendix, ",
"and even with the appendix I found the explanation of the level-by-level decomposition to be too terse.",
"2. It's not clear how their gradient approximation compares to the technique used by Li et. al. ",
"They obviously get better results on the addition and Nanocraft domains, ",
"but I would have liked a more clear explanation and/or some experiments providing insights into what enables these improvements (or at least an admission by the authors that they don't really understand what enabled the performance improvements)."
] | [
"fact",
"fact",
"fact",
"fact",
"fact",
"fact",
"fact",
"fact",
"evaluation",
"fact",
"fact",
"evaluation",
"fact",
"fact",
"evaluation",
"request",
"fact",
"evaluation",
"evaluation",
"evaluation",
"evaluation",
"evaluation",
"evaluation",
"evaluation",
"evaluation",
"evaluation",
"fact",
"request"
] |
HJmMNVDlz | [
"This paper proposes a new model for the general task of inducing document representations (embeddings).",
"The approach uses a CNN architecture, distinguishing it from the majority of prior efforts on this problem, which have tended to use RNNs.",
"This affords obvious computational advantages, as training may be parallelized.",
"Overall, the model presented is relatively simple (a good thing, in my view) and it indeed seems fast.",
"I can thus see potential practical uses of this CNN based approach to document embedding in future work on language tasks.",
"The training strategy, which entails selecting documents and then indexes within them stochastically, is also neat.",
"Furthermore, the work is presented relatively clearly.",
"That said, my main concerns regarding this paper are that: (1) there's not much new here, and,",
"(2) the experimental setup may be flawed,",
"in that it would seem model hyperparams were tuned for the proposed approach but not for the baselines;",
"I elaborate on these concerns below.",
"Specific comments:---- It's hard to tease out exactly what's new here:",
"the various elements used are all well known.",
"But perhaps there is merit in putting the specific pieces together.",
"Essentially, the novelty is using a CNN rather than an RNN to induce document embeddings.",
"- In Section 4.1, the authors write that they report results for their after running \"parameter sweeps ...\" --",
"I presume that these were performed on a validation set,",
"but the authors should say so.",
"In any case, a very potential weakness here: were analagous parameter sweeps for this dataset performed for the baseline models?",
"It would seem not, as the authors write \"the IMDB training data using the default hyper-parameters\" for skip-thought.",
"Surely it is unfair comparison if one model has been tuned to a given dataset while others use only the default hyper-parameters?",
"- Many important questions were left unaddressed in the experiments.",
"For example, does one really need to use the gating mechanism borrowed from the Dauphin et al. paper?",
"What happens if not?",
"How big of an effect does the stochastic sampling of document indices have on the learned embeddings?",
"Does the specific underlying CNN architecture affect results, and how much?",
"None of these questions are explored.",
"- I was left a bit confused regarding how the v_{1:i-1} embedding is actually estimated;",
"I think the details here are insufficient in the current presentation.",
"The authors write that this is a \"function of all words up to w_{i-1}\".",
"This would seem to imply that at test time, prediction is not in fact parallelizable, no?",
"Yet this seems to be one of the main arguments the authors make in favor of the model (in contrast to RNN based methods).",
"In fact, I think the authors are proposing using the (aggregated) filter activation vectors (h^l(x)) in eq. 5,",
"but for some reason this is not made explicit.",
"Minor comments:- In Eq. 4, should the product be element-wise to realize the desired gating (as per the Dauhpin paper)?",
"This should be made explicit in the notation.",
"- On the bottom of page 3, the authors claim \"Expanding the prediction to multiple words makes the problem more difficult since the only way to achieve that is by 'understanding' the preceding sequence.\"",
"This claim should either by made more precise or removed.",
"It is not clear exactly what is meant here, nor what evidence supports it.",
"- Commas are missing in a few.",
"For example on page 2, probably want a comma after \"in parallel\" (before \"significantly\"); also after \"parallelize\" above \"Approach\".",
"- Page 4: \"In contrast, our model addresses only requires\"",
"--> drop the \"addresses\"."
] | [
"fact",
"fact",
"fact",
"evaluation",
"evaluation",
"evaluation",
"evaluation",
"fact",
"fact",
"fact",
"non-arg",
"evaluation",
"fact",
"evaluation",
"fact",
"fact",
"fact",
"request",
"fact",
"fact",
"evaluation",
"evaluation",
"request",
"request",
"request",
"request",
"fact",
"evaluation",
"request",
"fact",
"fact",
"fact",
"fact",
"fact",
"request",
"request",
"fact",
"request",
"evaluation",
"fact",
"request",
"quote",
"request"
] |
HJZM5e9eM | [
"Summary This article considers neural networks over time-series, defined as a succession of convolutions and fully-connected layers with Leaky ReLU activations.",
"The authors provide relatively general conditions for transformations described by such networks to admit a Lipschitz-continuous inverse.",
"They extend these results to the case where the first layer is a convolution with irregular sampling.",
"Finally, they show that the first convolutional filters can be chosen so as to represent a discrete wavelet transform, and provide some numerical experiments.",
"Main remarks While the introduction seemed promising,",
"and I enjoyed the writing style,",
"I was disappointed with this article.",
"(1) There are many mistakes in the mathematical statements.",
"First, in Theorem 1.1, I do not think that phi_L \\circ ... \\circ phi_1 \\circ F is a non-linear frame,",
"because I do not see why it should be of the form of Definition 1.2 (what would be the functions psi_n?).",
"For the same reason, I also do not understand Theorem 1.2.",
"In Proof 1.4, the line of equalities after « Also with the Plancherel formula » is, in my opinion, not true,",
"because the L^2 norm of a product of functions is not the product of the L^2 norms of the functions.",
"It also seems to me that Theorem 1.3, from [Benedetto, 1992], is incorrect:",
"it is not the limit of t_n/n that must be larger than 2R, but the limit of N_n/n (with N_n the number of t_i's that belong to the interval [-n;n]),",
"and there must probably be a compatibility condition between (t_n)_n and R_1, not only between (t_n)_n and R.",
"In Proposition 1.6, I think that the equality should be a strict inequality.",
"Additionally, I do not say that Proof 2.1 is not true,",
"but the fact that the undersampling by a factor 2 does not prevent the operator from being a frame should be justified.",
"(2) The authors do not justify, in the introduction, why admitting a continuous inverse should be a crucial criterion of quality for the representation described by a neural network.",
"Additionally, the existence of this continous inverse relies on the fact that the non-linearity that is used is a Leaky ReLU,",
"which looks a bit like \"cheating\" to me,",
"because the Lipschitz constant of the inverse of a Leaky ReLU, although finite, is large,",
"so it seems to me that cascading several layers with Leaky ReLUs could encode a transformation with strictly positive, but still very poor frame bounds.",
"(3) I also do not understand why having \"orthogonal outputs\", as in Section 2, is really desirable;",
"I think that it should be better justified.",
"Also, there are probably other ways to achieve orthogonality than using wavelets in the first layer,",
"so the fact that wavelets achieve orthogonality does not really justify why using wavelets in the first layer is a good choice, compared to other filters.",
"(4) I had understood in the introduction that the authors would explain how to define a (good) deep representation for data of the form (x_n)_{n\\in\\N}, where each x_n would be the value of a time series at instant t_n, with the t_n non-uniformly spaced.",
"But all the representations considered in the article seem to be applicable to functions in L^2(\\R) only (like in Theorem 1.4 and Theorem 2.2), and not to sequences (x_n)_{n\\in\\N}.",
"There is something that I did not get here.",
"Minor remarks - Fourth paragraph, third line: \"this generalization frames\"?",
"- Last paragraph before \"Contributions & Organization\": \"that that\".",
"- Paragraph about notations: it seems to me that what is defined as l^2(R) is denoted as l^2(Z) after the introduction.",
"- Last line of this paragraph: R^d_1 should be R^{d_1}, and R^d_2 R^{d_2}.",
"- I think \"smooth\" could be replaced by \"continuous\"",
"(smoothness implies a notion of differentiability).",
"- Paragraph before Proposition 1.1: \\sqrt{s} is not defined, and \"is supported\" should be \"are supported\".",
"- Theorem 1.1: the f_k should be phi_k.",
"- Definition 1.4: \"piece-linear\" -> \"piecewise linear\"?",
"- Lemma 1.2 and Proof 1.4: there are indices missing to \\tilde h and \\tilde g.",
"- Proof 1.4: \"and finally\" -> \"And finally\".",
"- Proof 1.5: I do not understand the grammatical structure of the second sentence.",
"- Proposition 1.4: the definition of a RNN is the same as definition 1.2 (except for the frame bounds);",
"I do not see why such transformations should model RNNs.",
"- Paragraph before Proposition 1.5: \"in,formation\".",
"- Proposition 1.6: it should be said on which space the frame is injective.",
"- On page 8, \"Lipschitz\" is erroneously written (twice).",
"- Proposition 1.7: \"ProjW,l\"?",
"- Definition 2.1: in the \"nested\" property, I think that the inclusion should be the other way around.",
"- Before Theorem 2.1, the sentence \"Such Riesz basis is proven\" is unclear to me.",
"- Theorem 2.1: \"filters convolution filters\".",
"- I think the architecture described in Theorem 2.2 could be clarified;",
"I am not exactly sure where all the arrows start from.",
"- First line of Subsection 2.3: \". is always\" -> \"is always\".",
"- First paragraph of Subsection 3.2: \"the the\".",
"- Paragraph 3.2: could the previous algorithms developed for this dataset be described in slightly more detail?",
"I also do not understand the meaning of \"must solely leverage the temporal structure\".",
"- I think that the section about numerical experiments could be slightly rewritten, so that the architecture used in each experiment is clearer.",
"In Paragraph 3.2 in particular, I did not get why the architecture presented in Figure 6 has far fewer parameters than the one in Figure 5;",
"it would help if the authors clearly precised how many parameters each layer contains.",
"- Conclusion: \"we can to\" -> \"we can\".",
"- Definition 4.1: p_v(s) -> p_v(t)."
] | [
"fact",
"evaluation",
"fact",
"fact",
"evaluation",
"evaluation",
"evaluation",
"fact",
"evaluation",
"evaluation",
"evaluation",
"evaluation",
"fact",
"fact",
"fact",
"fact",
"evaluation",
"non-arg",
"request",
"fact",
"fact",
"evaluation",
"fact",
"evaluation",
"evaluation",
"request",
"evaluation",
"evaluation",
"evaluation",
"evaluation",
"non-arg",
"evaluation",
"request",
"evaluation",
"request",
"request",
"evaluation",
"request",
"request",
"request",
"request",
"request",
"evaluation",
"fact",
"evaluation",
"non-arg",
"request",
"fact",
"non-arg",
"request",
"evaluation",
"non-arg",
"request",
"evaluation",
"request",
"non-arg",
"request",
"evaluation",
"request",
"evaluation",
"request",
"request",
"request"
] |
Hyd9YyOlf | [
"The paper studies the problem of DNN loss function design for reducing intra-class variance in the output feature space. ",
"The key contribution is proposing an isotropic variant of the softmax loss that can balance the accuracy of classification and compactness of individual class. ",
"The proposed loss has been compared extensively against a number of closely related approaches in methodology. ",
"Numerical results on benchmark datasets show some improvement of the proposed loss over softmax loss and center loss (Wen et al., 2016), when applied to distance-based classifiers such as k-NN and k-means. ",
"Pros: - The idea of isotropic normalization for enhancing compactness of class is well motivated",
"- The paper is mostly clearly organized and presented.",
"- Numerical study shows some promise of the proposed method.",
"Cons: - The novelty of method is mostly incremental given the prior work of (Wen et al., 2016) which has provided a slightly different isotropic variant of softmax loss.",
"- The training procedure of the proposed method remains unclear in this paper."
] | [
"fact",
"fact",
"fact",
"fact",
"evaluation",
"evaluation",
"evaluation",
"evaluation",
"evaluation"
] |
By-CxBKgz | [
"This paper presents Defense-GAN: a GAN that used at test time to map the input generate an image (G(z)) close (in MSE(G(z), x)) to the input image (x), by applying several steps of gradient descent of this MSE. ",
"The GAN is a WGAN trained on the train set (only to keep the generator). ",
"The goal of the whole approach is to be robust to adversarial examples, without having to change the (downstream task) classifier, only swapping in the G(z) for the x.",
"+ The paper is easy to follow.",
"+ It seems (but I am not an expert in adversarial examples) to cite the relevant litterature (that I know of) and compare to reasonably established attacks and defenses.",
"+ Simple/directly applicable approach that seems to work experimentally, ",
"but - A missing baseline is to take the nearest neighbour of the (perturbed) x from the training set.",
"- Only MNIST-sized images, and MNIST-like (60k train set, 10 labels) datasets: MNIST and F-MNIST.",
"- Between 0.043sec and 0.825 sec to reconstruct an MNIST-sized image.",
"? MagNet results were very often worse than no defense in Table 4, ",
"could you comment on that?",
"- In white-box attacks, it seems to me like L steps of gradient descent on MSE(G(z), x) should be directly extended to L steps of (at least) FGSM-based attacks, at least as a control."
] | [
"fact",
"fact",
"fact",
"evaluation",
"fact",
"fact",
"fact",
"fact",
"fact",
"fact",
"non-arg",
"request"
] |
r1ke1YDlz | [
"SIGNIFICANCE AND ORIGINALITY: The authors propose to accelerate the learning of complex tasks by exploiting traces of experts.",
"Unlike the most common form of imitation learning or behavioral cloning, the authors formulate their solution in the case where the expert’s state trajectory is observable, but the expert’s actions are not. ",
"This is an important and useful problem in robotics and other applications. ",
"Within this specific setting the authors differentiate their approach from others by developing a solution that does NOT estimate an explicit dynamics model ( e.g., P( S’ | S, A ) ).",
"The benefits of not estimating an explicit action model are not really demonstrated in a clear way.",
"The author’s articulate a specific solution that provides heuristic guidance rewards that cause the learner to favor actions that achieve subgoals calculated from expert behavior and refactors the representation of the Q function so that it has a component that is a function of the subgoal extracted from the expert.",
"These subgoals are linear functions of the expert’s change in state (or change in state features).",
"The resultant policy is a function of the expert traces on which it depends.",
"The authors show they can retrain a new policy that does not require the expert traces.",
"As far as I am aware, this is a novel approach to the problem. ",
"The authors claim that this factorization is important and useful ",
"but the paper doesn’t really illustrate this well.",
"They demonstrate the usefulness of the algorithm against a DQN baseline on Doom game problems.",
"The algorithm learns faster than unassisted DQN as shown by learning curve plots. ",
"They also evaluate the algorithms on the quality of the final policies for their approach, DQN, and a supervised learning from demonstration approach ( LfD ) that requires expert actions.",
"The proposed approach does as well or better than competing approaches.",
"QUALITY Ablation studies show that the guidance rewards are important to achieving the improved performance of the proposed method which is important confirmation that the architecture is working in the intended way. ",
"However, it would also be useful to do an ablation study of the “factorization” of action values. ",
"Is this important to achieving better results as well or is the guidance reward enough? ",
"This seems like a key claim to establish.",
"CLARITY The details of the memory based kernel density estimation and neural gradient training seemed complicated by the way that the process was implemented. ",
"Is it possible to communicate the intuitions behind what is going on?",
"I was able to work out the intuitions behind the heuristic rewards, but I still don’t clearly get what the Q-value factorization is providing:",
"To keep my text readable, I assume we are working in feature space instead of state space and use different letters for learner and expert:",
"Learner: S = \\phi(s) Expert’s i^th state visit: Ei = \\phi( \\hat{s}_i } where Ei’ is the successor state to Ei",
"The paper builds upon approximate n-step discrete-action Q-learning where the Q value for an action is a linear function of the state features: Qp(S,a) = Wa S + Ba where parameters p = ( Wa, Ba ).",
"After observing an experience ( S,A,R,S’ ) we use Bellman Error as a loss function to optimize Qp for parameter p.",
"I ignore the complexities of n-step learning and discount factors for clarity.",
"Loss = E[ R + MAXa’ Qp(S’,a’) - Qp(S,a) ] ",
"The authors suggest we can augment the environment reward R with a heuristic reward Rh proportional to the similarity between the learner “subgoal\" and the expert “subgoal\" in similar states. ",
"The authors propose to use cosine distance between representations of what they call the “subgoals” of learner and expert. ",
"A subgoal is defined as a linear transformation of the distance traveled by an agent during a transition.",
"The heuristic reward is proportional to the cosine distance between the learner and expert “subgoals\" Rh = B < Wv LearnerDirectionInStateS, Wv ExpectedExpertDirectionInStatesSimilarToS > The learner’s direction in state S is just (S-S’) in feature space.",
"The authors model the behavior of the expert as a kernel density type approximator giving the expected direction of the expert starting from a states similar to the one the learner is in. ",
"Let < Wk S, Wk Ej > be a weighted similarity between learner state features S and expert state features Ej and Ej’ be the successor state features encountered by the expert.",
"Then the expected expert direction for learner state S is: SUMj < Wk S, Wk Ej > ( Ej - Ej’ ) ",
"Presumably the linear Wk transform helps us pick out the important dimensions of similarity between S and Ej.",
"Mapping the learner and expert directions into subgoal space using Wv, the heuristic reward is Rh = B < Wv (S-S’), Wv SUMj < Wk S, Wk Ej > ( Ej - Ej’ ) >",
"I ignore the ReLU here, but I assume that is operates element-wise and just clips negative values?",
"There is only one layer here ",
"so we don’t have complex non-linear things going on?",
"In addition to introducing a heuristic reward term, the authors propose to alter the Q-function to be specific to the subgoal.",
"Q( s,a,g ) = g(S) Wa S + Ba",
"The subgoal is the same as the first part, namely a linear transform of the expected expert direction in states similar to state S.",
"g(S) = Wv SUMj < Wk S, Wk Ej > ( Ej - Ej’ ) ",
"So in some sense, the Q function is really just a function of S, as g is calculated from S.",
"Q( S,a ) = g(S) Wa S + Ba ",
"So this allows the Q-function more flexibility to capture each subgoal in a different linear space?",
"I don’t really get the intuition behind this formulation. ",
"It allows the subgoal to adjust the value of the underlying model? ",
"Essentially the expert defines a new Q-value problem at every state for the learner? ",
"In some sense are we are defining a model for the action taken by the expert?",
"ADDITIONAL THOUGHTS While the authors compare to an unassisted baseline, they don’t compare to methods that use an action model",
"which is not a fatal flaw but would have been nice. ",
"One can imagine there might be scenarios where the local guidance rewards of this form could be problematic, particularly in scenarios where the expert and learner are not identical",
"and it is possible to return to previous states, such as the grid worlds the authors discuss:",
"If the expert’s first few transitions were easily approximable, the learner would get local rewards that cause it to mimic expert behavior.",
"However, if the next step in the expert’s path was difficult to approximate, then the reward for imitating the expert would be lower.",
"Would the learner then just prefer to go back towards those states that it can approximate and endlessly loop?",
"In this case, perhaps expressing heuristic rewards as potentials as described in Ng’s shaping paper might solve the problem.",
"PROS AND CONS Important problem generally. ",
"Avoiding the estimation of a dynamics model was stated as a given, but perhaps more could be put into motivating this goal. ",
"Hopefully it is possible to streamline the methodology section to communicate the intuitions more easily."
] | [
"fact",
"fact",
"evaluation",
"fact",
"evaluation",
"fact",
"fact",
"fact",
"fact",
"evaluation",
"fact",
"evaluation",
"fact",
"fact",
"fact",
"fact",
"evaluation",
"request",
"request",
"evaluation",
"evaluation",
"request",
"evaluation",
"non-arg",
"non-arg",
"evaluation",
"fact",
"non-arg",
"fact",
"fact",
"fact",
"fact",
"fact",
"fact",
"fact",
"fact",
"fact",
"fact",
"fact",
"fact",
"evaluation",
"fact",
"fact",
"fact",
"fact",
"fact",
"fact",
"request",
"evaluation",
"request",
"fact",
"request",
"fact",
"evaluation",
"evaluation",
"fact",
"evaluation",
"fact",
"request",
"request",
"evaluation",
"request",
"request"
] |
ByGPUUYgz | [
"This paper attacks an important problems with an interesting and promising methodology. ",
"The authors deal with inference in models of collective behavior, specifically at how to infer the parameters of a mean field game representation of collective behavior. ",
"The technique the authors innovate is to specify a mean field game as a model, and then use inverse reinforcement learning to learn the reward functions of agents in the mean field game.",
"This work has many virtues, and could be an impactful piece. ",
"There is still minimal work at the intersection of machine learning and collective behavior, ",
"and this paper could help to stimulate the growth of that intersection. ",
"The application to collective behavior could be an interesting novel application to many in machine learning, ",
"and conversely the inference techniques that are innovated should be novel to many researchers in collective behavior.",
"At the same time, the scientific content of the work has critical conceptual flaws. ",
"Most fundamentally, the authors appear to implicitly center their work around highly controversial claims about the ontological status of group optimization, without the careful justification necessary to make this kind of argument. ",
"In addition to that, the authors appear to implicitly assume that utility function inference can be used for causal inference. ",
"That is, there are two distinct mistakes the authors make in their scientific claims:",
"1) The authors write as if mean field games represent population optimization ",
"(Mean field games are not about what a _group_ optimizes; they are about what _individuals_ optimize, and this individual optimization leads to certain patterns in collective behaviors)",
"2) The authors write as if utility/reward function inference alone can provide causal understanding of collective or individual behavior",
"1 - I should say that I am highly sympathetic to the claim that many types of collective behavior can be viewed as optimizing some kind of objective function. ",
"However, this claim is far from mainstream, and is in fact highly contested. ",
"For instance, many prominent pieces of work in the study of collective behavior have highlighted its irrational aspects, from the madness of crowds to herding in financial markets.",
"Since it is so fringe to attribute causal agency to groups, let alone optimal agency, ",
"in the remainder of my review I will give the authors the benefit of the doubt and assume when they say things like \"population behavior may be optimal\", they mean \"the behavior of individuals within a population may be optimal\". ",
"If the authors do mean to say this, they should be more careful about their language use in this regard (individuals are the actors, not populations). ",
"If the authors do indeed mean to attribute causal agency to groups (as suggested in their MDP representation), they will run into all the criticisms I would have about an individual-level analysis and more. ",
"Suffice it to say, mean field games themselves don't make claims about aggregate-level optimization. ",
"A Nash equilibrium achieves a balance between individual-level reward functions. ",
"These reward functions are only interpretable at the individual level. ",
"There is no objective function the group itself in aggregate is optimizing in mean field games. ",
"For instance, even though the mean field game model of the Mexican wave produces wave solutions, ",
"the model is premised on people having individual utility functions that lead to emergent wave behavior. ",
"The model does not have the representational capacity to explain that people actually intend to create the emergent behavior of a wave (even though in this case they do). ",
"Furthermore, the fact that mean field games aggregate to a single-agent MDP does not imply that that the group can rightfully be thought of as an agent optimizing the reward function, ",
"because there is an exact correspondence between the rewards of the individual agents in the MFG and of the aggregate agent in the MDP by construction.",
"2 - The authors also claim that their inference methods can help explain why people choose to talk about certain topics. ",
"As far as the extent to which utility / reward function inference can provide causal explanations of individual (or collective) behavior, the argument that is invariably brought against a claim of optimization is that almost any behavior can be explained as optimal post-hoc with enough degrees of freedom in the utiliy function of the behavioral model. ",
"Since optimization frameworks are so flexible, ",
"they have little explanatory power and are hard to falsify. ",
"In fact, there is literally no way that the modeling framework of the authors even affords the possibility that individual/collective behavior is not optimal. ",
"Optimality is taken as an assumption that allows the authors to infer what reward function is being optimized. ",
"The authors state that the reward function they infer helps to interpret collective behavior ",
"because it reveals what people are optimizing. ",
"However, the reward function actually discovered is not interpretable at all. ",
"It is simply a summary of the statistical properties of changes in popularity of the topics of conversation in the Twitter data the authors' study. ",
"To quote the authors' insights: \"The learned reward function reveals that a real social media population favors states characterized by a highly non-uniform distribution with negative mass gradient in decreasing order of topic popularity, as well as transitions that increase this distribution imbalance.\" ",
"The authors might as well have simply visualized the topic popularities and changes in popularities to arrive at such an insight. ",
"To take the authors claims literally, we would say that people have an intrinsic preference for everyone to arbitrarily be talking about the same thing, regardless of the content or relevance of that topic. ",
"To draw an analogy, this is like observing that on some days everybody on the street is carrying open umbrellas and on other days not, and inferring that the people on the street have a preference for everyone having their umbrellas open together (and the model would then predict that if one person opens an umbrella on a sunny day, everybody else will too).",
"To the authors credit, they do make a brief attempt to present empirical evidence for their optimization view, stating succinctly: \"The high prediction accuracy of the learned policy provides evidence that real population behavior can be understood and modeled as the result of an emergent population-level optimization with respect to a reward function.\" ",
"Needless to say, this one-sentence argument for a highly controversial scientific claims falls flat on closer inspection. ",
"Setting aside the issues of correlation versus causation, predictive accuracy does not in and of itself provide scientific plausibility. ",
"When an n-gram model produces text that is in the style of a particular writer, we do not conclude that the writer must have been composing based on the n-gram's generative mechanism. ",
"Predictive accuracy only provides evidence when combined in the first place with scientific plausibility through other avenues of evidence.",
"The authors could attempt to address these issues by making what is called an \"as-if\" argument, ",
"but it's not even clear such an argument could work here in general. ",
"With all this in mind, it would be more instructive to show that the inference method the authors introduce could infer the correct utility functions used in standard mean field games, such as modeling traffic congestion and the Mexican wave. ",
"-- All that said, the general approach taken in the authors' work is highly promising, ",
"and there are many fruitful directions I would be exicted to see this work taken --- e.g., combining endogenous and exogenous rewards or looking at more complex applications. ",
"As a technical contribution, the paper is wonderful, ",
"and I would enthusiastically support acceptance. ",
"The authors simply either need to be much more careful with the scientific claims about collective behavior they make, or limit the scope of the contribution of the paper to be modeling / inference in the area of collective behavior. ",
"Mean field games are an important class of models in collective behavior, ",
"and being able to infer their parameters is a nice step forward purely due to the importance of that class of games. ",
"Identifying where the authors' inference method could be applied to draw valid scientific conclusions about collective behavior could then be an avenue for future work. ",
"Examples of plausible scientific applications might include parameter inference in settings where mean field games are already typically applied in order to improve the fit of those models or to learn about trade-offs people make in their utility functions in those settings.",
"-- Other minor comments: - (Introduction) It is not clear at all how the Arab Spring, Black Lives Matter, and fake news are similar --- i.e., whether a single model could provide insight into these highly heterogeneous events ",
"--- nor is it clear what end the authors hope to achieve by modeling them ",
"--- the ethics of modeling protests in a field crowded with powerful institutional actors is worth carefully considering.",
"- If I understand correctly, the fact that the authors assume a factored reward function seems limiting. ",
"Isn't the major benefit of game theory it's ability to accommodate utility functions that depend on the actions of others?",
"- The authors state that one of their essential insights is that \"solving the optimization problem of a single-agent MDP is equivalent to solving the inference problem of an MFG.\" ",
"This statement feels a bit too cute at the expense of clarity. ",
"The authors perform inference via inverse-RL, ",
"so it's more clear to say the authors are attempting to use statistical inference to figure out what is being optimized.",
"- The relationship between MFGs and a single-agent MDP is nice and a fine observation, but not as surprising as the authors frame it as. ",
"Any multiagent MDP can be naively represented as a single-agent MDP where the agent has control over the entire population, ",
"and we already know that stochastic games are closely related to MDPs. ",
"It's therefore hard to imagine that there woudn't be some sort of correspondence."
] | [
"evaluation",
"fact",
"fact",
"evaluation",
"evaluation",
"evaluation",
"evaluation",
"evaluation",
"evaluation",
"evaluation",
"evaluation",
"fact",
"fact",
"fact",
"fact",
"evaluation",
"evaluation",
"evaluation",
"evaluation",
"evaluation",
"request",
"evaluation",
"fact",
"fact",
"fact",
"fact",
"fact",
"fact",
"fact",
"fact",
"fact",
"fact",
"fact",
"evaluation",
"evaluation",
"fact",
"evaluation",
"fact",
"fact",
"fact",
"fact",
"quote",
"request",
"fact",
"evaluation",
"fact",
"evaluation",
"fact",
"fact",
"fact",
"evaluation",
"evaluation",
"request",
"evaluation",
"evaluation",
"evaluation",
"evaluation",
"request",
"evaluation",
"evaluation",
"evaluation",
"evaluation",
"evaluation",
"evaluation",
"evaluation",
"evaluation",
"evaluation",
"fact",
"evaluation",
"fact",
"evaluation",
"evaluation",
"fact",
"fact",
"evaluation"
] |
B1fZIQcxM | [
"The paper is not anonymized.",
"In page 2, the first line, the authors revealed [15] is a self-citation",
"and [15] is not anonumized in the reference list."
] | [
"evaluation",
"fact",
"fact"
] |
H1asng9lG | [
"This paper introduces a new exploration policy for Reinforcement Learning for agents on the web called \"Workflow Guided Exploration\".",
"Workflows are defined through a DSL unique to the domain.",
"The paper is clear, very well written, and well-motivated.",
"Exploration is still a challenging problem for RL.",
"The workflows remind me of options though in this paper they appear to be hand-crafted.",
"In that sense, I wonder if this has been done before in another domain.",
"The results suggest that WGE sometimes helps but not consistently.",
"While the experiments show that DOMNET improves over Shi et al, that could be explained as not having to train on raw pixels or not enough episodes."
] | [
"fact",
"fact",
"evaluation",
"fact",
"evaluation",
"non-arg",
"fact",
"evaluation"
] |
HkgrJeEgM | [
"This paper studies the question: Why does SGD on deep network is often successful, despite the fact that the objective induces bad local minima?",
"The approach in this paper is to study a standard MNN with one hidden layer. ",
"They show that in an overparametrized regime, where the number of parameters is logarithmically larger than the number of parameters in the input, the ratio between the number of (bad) local minima to the number of global minima decays exponentially. ",
"They show this for a piecewise linear activation function, and input drawn from a standard Normal distribution. ",
"Their improvement over previous work is that the required overparameterization is fairly moderate, and that the network that they considered is similar to ones used in practice. ",
"This result seems interesting, ",
"although it is clearly not sufficient to explain even the success on the setting studied in this paper, ",
"since the number of minima of a certain type does not correspond to the probability of the SGD ending in one: ",
"to estimate the latter, the size of each basin of attraction should be taken into account. ",
"The authors are aware of this point and mention it as a disadvantage. ",
"However, since this question in general is a difficult one, ",
"any progress might be considered interesting. ",
"Hopefully, in future work it would be possible to also bound the probability of starting in one of the basins of attraction of bad local minima.",
"The paper is well written and well presented, ",
"and the limitations of the approach, as well as its advantages over previous work, are clearly explained. ",
"As I am not an expert on the previous works in this field, my judgment relies mostly on this work and its representation of previous work. ",
"I did not verify the proofs in the appendix."
] | [
"fact",
"fact",
"fact",
"fact",
"evaluation",
"evaluation",
"evaluation",
"fact",
"fact",
"fact",
"evaluation",
"evaluation",
"evaluation",
"evaluation",
"evaluation",
"non-arg",
"non-arg"
] |
SJs7uYYeM | [
"At the heart of the paper, there is a single idea: to decouple the weight decay from the number of steps taken by the optimization process (the paragraph at the end of page 2 is the key to the paper). ",
"This is an important and largely overlooked area of implementation ",
"and most off-the-shelf optimization algorithms, unfortunately, miss this point, too. ",
"I think that the proposed implementation should be taken seriously, especially in conjunction with the discussion that has been carried out with the work of Wilson et al., 2017 ",
"(https://arxiv.org/abs/1705.08292).",
"The introduction does a decent job explaining why it is necessary to pay attention to the norm of the weights as the training progresses within its scope. ",
"However, I would like to add a couple more points to the discussion: - \"Optimal weight decay is a function (among other things) of the total number of epochs / batch passes.\" ",
"in principle, it is a function of weight updates. ",
"Clearly, it depends on the way the decay process is scheduled. ",
"However, there is a bad habit in DL where time is scaled by the number of epochs rather than the number of weight updates which sometimes lead to misleading plots (for instance, when comparing two algorithms with different batch sizes).",
"- Another ICLR 2018 submission has an interesting take on the norm of the weights and the algorithm ",
"(https://openreview.net/forum?id=HkmaTz-0W¬eId=HkmaTz-0W). ",
"Figure 3 shows the histograms of SGD/ADAM with and without WD (the *un-fixed* version), ",
"and it clearly shows how the landscape appear misleadingly different when one doesn't pay attention to the weight distribution in visualizations. ",
"- In figure 2, it appears that the training process has three phases, an initial decay, a steady progress, and a final decay that is more pronounced in AdamW. ",
"This final decay also correlates with the better test error of the proposed method. ",
"This third part also seems to correspond to the difference between Adam and AdamW through the way they branch out after following similar curves. ",
"One wonders what causes this branching and whether the key the desired effects are observed at the bottom of the landscape.",
"- The paper concludes with \"Advani & Saxe (2017) analytically showed that in the limited data regime of deep networks the presence of eigenvalues that are zero forms a frozen subspace in which no learning occurs and thus smaller (e.g., zero) initial weight norms should be used to achieve best generalization results.\" ",
"Related to this there is another ICLR 2018 submission ",
"(https://openreview.net/forum?id=rJrTwxbCb), ",
"figure 1 shows that the eigenvalues of the Hessian of the loss have zero forms at the bottom of the landscape, not at the beginning. ",
"Back to the previous point, maybe that discussion should focus on the second and third phases of the training, not the beginning. ",
"- Finally, it would also be interesting to discuss the relation of the behavior of the weights at the last parts of the training and its connection to pruning. ",
"I'm aware that one can easily go beyond the scope of the paper by adding more material. ",
"Therefore, it is not completely reasonable to expect all such possible discussions to take place at once. ",
"The paper as it stands is reasonably self-contained and to the point. ",
"Just a minor last point that is irrelevant to the content of the work: The slash punctuation mark that is used to indicate 'or' should be used without spaces as in 'epochs/batch'."
] | [
"fact",
"evaluation",
"evaluation",
"evaluation",
"reference",
"evaluation",
"quote",
"fact",
"fact",
"evaluation",
"evaluation",
"reference",
"fact",
"evaluation",
"fact",
"fact",
"fact",
"request",
"fact",
"fact",
"reference",
"fact",
"request",
"request",
"evaluation",
"evaluation",
"evaluation",
"request"
] |
HkdTXw1bM | [
"The paper takes an interesting approach to solve the existing problems of GAN training, using Coulomb potential for addressing the learning problem. ",
"It is also well written with a clear presentation of the motivation of the problems it is trying to address, the background and proves the optimality of the suggested solution. ",
"My understanding and validity of the proof is still an educated guess. ",
"I have been through section A.2 , but I'm unfamiliar with the earlier literature on the similar topics so I would not be able to comment on it. ",
"Overall, I think this is a good paper that provides a novel way of looking at and solving problems in GANs. ",
"I just had a couple of points in the paper that I would like some clarification on : ",
"* In section 2.2.1 : The notion of the generated a_i not disappearing is something I did not follow. ",
"What does it mean for a generated sample to \"not disappear\" ? ",
"and this directly extends to the continuity equation in (2). ",
"* In section 1 : in the explanation of the 3rd problem that GANs exhibit i.e. the generator not being able to generalize the distribution of the input samples, I was hoping if you could give a bit more motivation as to why this happens. ",
"I don't think this needs to be included in the paper, ",
"but would like to have it for a personal clarification."
] | [
"evaluation",
"evaluation",
"non-arg",
"non-arg",
"evaluation",
"evaluation",
"evaluation",
"request",
"fact",
"request",
"evaluation",
"request"
] |
rycZrCJef | [
"Authors of this paper derived an efficient quantum-inspired learning algorithm based on a hierarchical representation that is known as tree tensor network, which is inspired by the multipartite entanglement renormalization ansatz approach where the tensors in the TN are kept to be unitary during training. ",
"Some observations are: The limitation of learnability of TTN strongly depends on the physical indexes and the geometrical indexes determine how well the TTNs approximate the limit; ",
"TTNs exhibit same increase level of abstractions as CNN or DBN; ",
"Fidelity and entanglement entropy can be considered as some measurements of the network.",
"Authors introduced the two-dimensional hierarchical tensor networks for solving image recognition problems, ",
"which suits more the 2-D nature of images. ",
"In section 2, authors stated that the choice of feature function is arbitrary, ",
"and a specific feature map was introduced in Section 4. ",
"However, it is not straightforward to connect (10) to (1) or (2). ",
"It is better to clarify this connection ",
"because some important parameters such as the virtual bond and input bond are related to the complexity of the proposed algorithm as well as the limitation of learnability. ",
"For example, the scaling of the complexity O(dN_T(b_v^5 + b_i^4)) is not easy to understand. ",
"Is it related to specific feature map? ",
"How about the complexity of eigen-decomposition for one tensor at each iterates. ",
"And also, whether the tricks used to accelerate the computations will affect the convergence of the algorithm? ",
"More details on these problems are required for readers’ better understanding.",
"From Fig 2, it is difficult to see the relationship between learnability and parameters such input bond and virtual bond ",
"because it seems there are no clear trends in the Fig 2(a) and (b) to make any conclusion. ",
"It is better to clarify these relationships with either clear explanation or better examples.",
"From Fig 3, authors claimed that TN obtained the same levels of abstractions as in deep learning. ",
"However, from Fig 3 only, it is hard to make this conclusion. ",
"First, there are not too many differences from Fig 3(a) to Fig 3(e). ",
"Second, there is no visualization result reported from deep learning on the same data for comparison. ",
"Hence, it is not convincing to draw this conclusion only from Fig 3. ",
"In Section 4.2, what strategy is used to obtain these parameters in Table 1?",
"In Section 5, it is interesting to see more experiments in terms of fidelity and entanglement entropy."
] | [
"fact",
"fact",
"fact",
"evaluation",
"fact",
"evaluation",
"fact",
"fact",
"evaluation",
"request",
"evaluation",
"evaluation",
"request",
"request",
"request",
"request",
"evaluation",
"evaluation",
"request",
"fact",
"evaluation",
"evaluation",
"fact",
"evaluation",
"non-arg",
"request"
] |
H1PuapUef | [
"*Paper summary* The paper considers GANs from a theoretical point of view. ",
"The authors approach GANs from the 3-Wasserstein point of view and provide several insights for a very specific setting. ",
"In my point of view, the main novel contribution of the paper is to notice the following fact: (*) It is well known that the 2-Wasserstein distance W2(PY,QY) between multivariate Gaussian PY and its empirical version QY scales as $n^{-2/d}$, i.e. converges very slow as the dimensionality of the space $d$ increases. ",
"In other words, QY is not such a good way to estimate PY in this setting. ",
"A somewhat better way is use a Gaussian distribution PZ with covariance matrix S computed as a sample covariance of QY. ",
"In this case W2(PY, PZ) scales as $\\sqrt{d/n}$.",
"The paper introduces this observation in a very strange way within the context of GANs. ",
"Moreover, I think the final conclusion of the paper (Eq. 19) has a mistake, ",
"which makes it hard to see why (*) has any relation to GANs at all.",
"There are several other results presented in the paper regarding relation between PCA and the 2-Wasserstein minimization for Gaussian distributions (Lemma 1 & Theorem 1). ",
"This is indeed an interesting point, ",
"however the proof is almost trivial ",
"and I am not sure if this provides any significant contribution for the future research.",
"Overall, I think the paper contains several novel ideas, ",
"but its structure requires a *significant* rework ",
"and in the current form it is not ready for being published. ",
"*Detailed comments* In the first part of the paper (Section 2) the authors propose to use the optimal transport distance Wc(PY, g(PX)) between the data distribution PY (or its empirical version QY) and the model as the objective for GAN optimization. ",
"This idea is not novel: ",
"WGAN [1] proposed (and successfully implemented) to minimize the particular case of W1 distance by going through the dual form, ",
"[2] proposed to approach any Wc using auto-encoder reformulation of the primal (and also shoed that [5] is doing exactly W2 minimization), ",
"and [3] proposed the same using Sinkhorn algorithm. ",
"So this point does not seem to be novel.",
"The rest of the paper only considers 2-Wasserstein distance with Gaussian PY and Gaussian g(PX) (which I will abbreviate with R), ",
"which looks like an extremely limited scenario (and certainly has almost no connection to the applications of GANs).",
"Section 3 first establishes a relation between PCA and minimizing 2-Wasserstein distance for Gaussian distributions (Lemma 1, Theorem 1). ",
"Then the authors show that if R minimizes W2(PY, R) and QR minimizes W2(QY, QR) then the excess loss W2(PY, QR) - W2(PY, R) approaches zero at the rate $n^{-2/d}$ (both for linear and unconstrained generators). ",
"This result basically provides an upper bound showing that GANs need exponentially many samples to minimize W2 distance. ",
"I don't find these results novel, ",
"as they already appeared in [4] with a matching lower bound for the case of Gaussians ",
"(Theorem B.1 in Appendix can be modified easily to show this). ",
"As the authors note in the conclusion of Section 3, these results have little to do with GANs, ",
"as GANs are known to learn quite quickly ",
"(which contradicts the theory of Section 3).",
"Finally, in Section 4 the authors approach the same W2 problem from its dual form and notice that for the LQG model the optimal discriminator is quadratic. ",
"Based on this they reformulate the W2 minimization for LQG as the constrained optimization with respect to p.d. matrix A (Eq 16). ",
"The same conclusion does not work unfortunately for W2(QY, R), ",
"which is the real training objective of GANs. ",
"Theorem 3 shows that nevertheless, if we still constrain discriminator in the dual form of W2(QY, R) to be quadratic, the resulting soliton QR* performs the empirical PCA of Pn. ",
"This leads to the final conclusion of the paper, ",
"which I think contains a mistake. ",
"In Eq 19 the first equation, according to the definitions of the authors, reads \\[W2(PY, QR) = W2(PY, PZ), (**)\\] where QR is trained to minimize min_R W2(QY, R) and PZ is as defined in (*) in the beginning of these notes. ",
"However, PZ is not the solution of min_R W2(QY, R) ",
"as the authors notice in the 2nd paragraph of page 8. ",
"Thus (**) is not true ",
"(at least, it is not proved in the current version of the text). ",
"PZ is a solution of min_R W2(QY, R) *where the discriminator is constrained to be quadratic*. ",
"This mismatch is especially strange, ",
"given the authors emphasize in the introduction that they provide bounds on divergences which are the same as used during the training (see 2nd paragraph on page 2) ",
"--- here the bound is on W2, but the empirical GAN actually does a regularized training (with constrained discriminator).",
"Finally, I don't think the experiments provide any convincing insights, ",
"because the authors use W1-minimization to illustrate properties of the W2. ",
"Essentially the authors say \"we don't have a way to perform W2 minimization, so we rather do the W1 minimization and assume that these two are kind of similar\".",
"* Other comments * (1) Discussion in Section 2.1 seems to never play a role in the paper.",
"(2) Page 4: in p-Wasserstein distance, ||.|| does not need to be a Euclidean metric. ",
"It can be any metric.",
"(3) Lemma 2 seems to repeat the result from (Canas and Rosasco, 2012) ",
"as later cited by authors on page 7?",
"(4) It is not obvious how does Theorem 2 translate to the excess loss? ",
"(5) Section 4. I am wondering how exactly the authors are going to compute the conjugate of the discriminator, given the discriminator most likely is a deep neural network?",
"[1] Arjovsky et al., Wasserstein GAN, 2017",
"[2] Bousquet et al, From optimal transport to generative modeling: the VEGAN cookbook, 2017",
"[3] Genevay et al., Learning Generative Models with Sinkhorn Divergences, 2017",
"[4] Arora et al, Generalization and equilibrium in GANs, 2017",
"[5] Makhazani et al., Adversarial Autoencoders, 2015"
] | [
"fact",
"fact",
"evaluation",
"fact",
"fact",
"fact",
"evaluation",
"fact",
"evaluation",
"fact",
"evaluation",
"evaluation",
"evaluation",
"evaluation",
"evaluation",
"evaluation",
"fact",
"evaluation",
"fact",
"fact",
"fact",
"evaluation",
"fact",
"evaluation",
"fact",
"fact",
"fact",
"evaluation",
"fact",
"fact",
"fact",
"evaluation",
"fact",
"fact",
"fact",
"fact",
"fact",
"fact",
"fact",
"fact",
"fact",
"fact",
"fact",
"fact",
"fact",
"fact",
"evaluation",
"fact",
"evaluation",
"evaluation",
"fact",
"evaluation",
"fact",
"evaluation",
"fact",
"fact",
"fact",
"evaluation",
"evaluation",
"reference",
"reference",
"reference",
"reference",
"reference"
] |
rk_xMk8ef | [
"Summary This paper presents a dataset of mathematical equations and applies TreeLSTMs to two tasks: verifying and completing mathematical equations. ",
"For these tasks, TreeLSTMs outperform TreeNNs and RNNs. ",
"In my opinion, the main contribution of this paper is this potentially useful dataset, as well as an interesting way of representing fixed-precision floats. ",
"However, the application of TreeNNs and TreeLSTMs is rather straight-forward, ",
"so in my (subjective) view there are only a few insights salvageable for the ICLR community ",
"and compared to Allamanis et al. (2017) this paper is a rather incremental extension.",
"Strengths The authors present a new datasets for mathematical identities. ",
"The method for generating additional correct identities could be useful for future research in this area.",
"I find the representation of fixed-precision floats presented in this paper intriguing. ",
"I believe this contribution should be emphasized more ",
"as it allows the model to generalize to unseen numbers ",
"and I am wondering whether the authors see some wider application of this representation for neural programming models.",
"I liked the categorization of the related work.",
"Weaknesses p2: It is mentioned that the framework is the first to combine symbolic expressions with black-box function evaluations, ",
"but I would argue that Neural Programmer-Interpreters (NPI; Reed & De Freitas) are already doing that ",
"(see Fig 1 in that paper where the execution trace is a symbolic expression and some expressions \"Act(LEFT)\" are black-box function applications directly changing the image).",
"The differences to Allamanis et al. (2017) are not worked out well. ",
"For instance, the authors use the TreeNN model from that paper as a baseline ",
"but the EqNet model is not mentioned at all. ",
"The obvious question is whether EqNets can be applied to the two tasks (verifying and completing mathematical equations) and if so why this has not been done.",
"The contribution regarding black box function application is unclear to me. ",
"On page 6, it is unclear to me what \"handles […] function evaluation expressions\". ",
"As far as I understand, the TreeLSTM learns to the return value of function evaluation expressions in order to predict equality of equations, ",
"but this should be clarified.",
"I find the connection of the proposed model and task to \"neural programming\" weak. ",
"For instance, as far as I understand there is no support for stateful programs. ",
"Furthermore, it would be interesting to hear how this work can be applied to existing programming languages such as Haskell. ",
"What are the limitations of the architecture? ",
"Could it learn to identify equality of two lists in Haskell?",
"p6: The paragraph on baseline models is rather uninformative. ",
"TreeLSTMs have been shown to outperform Tree NN's in various prior work. ",
"The statement that \"LSTM cell […] helps the model to have a better understanding of the underlying functions in the domain\" is vague. ",
"LSTM cells compared to fully-connected layers in Tree NNs ameliorate vanishing and exploding gradients along paths in the tree. ",
"Furthermore, I would like to see a qualitative analysis of the reasoning capabilities that are mentioned here. ",
"Did you observe any systematic differences in the ~4% of equations where the TreeLSTM fails to generalize (Table 3; first column).",
"Minor Comments Abstract: \"Our framework generalizes significantly better\" I think it would be good to already mention in comparison to what this statement is.",
"p1: \"aim to solve tasks such as learn mathematical\" -> \"aim to solve tasks such as learning mathematical\"",
"p2: You could add a citation for Theano, Tensorflow and Mxnet.",
"p2: Could you elaborate how equation completion is used in Mathematical Q&A?",
"p3: Could you expand on \"mathematical equation verification and completion […] has broader applicability\" by maybe giving some concrete examples.",
"p3 Eq. 5: What precision do you consider? ",
"Two digits?",
"p3: \"division because that they can\" -> \"division because they can\"",
"p4 Fig. 1: Is there a reason 1 is represented as 10^0 here? ",
"Do you need the distinction between 1 (the integer) and 1.0 (the float)?",
"p5: \"we include set of changes\" -> \"we include the set of changes\"",
"p5: In my view there is enough space to move appendix A to section 2. ",
"In addition, it would be great to see more examples of generated identities at this stage (including negative ones).",
"p5: \"We generate all possible equations (with high probability)\"",
"– what is probabilistic about this?",
"p5: I don't understand why function evaluation results in identities of depth 2 and 3. ",
"Is it both or one of them?",
"p6: The modules \"symbol\" and \"number\" are not shown in the figure. ",
"I assume they refer to projections using Wsymb and Wnum?",
"p6: \"tree structures neural networks\" -> \"tree structured neural networks\"",
"p6: A reference for the ADAM optimizer should be added.",
"p6: Which method was used for optimizing these hyperparameters? ",
"If a grid search was used, what intervals were used?",
"p7: \"the superiority of Tree LSTM to Tree NN shows that is important to incorporate cells that have memory\" is not a novel insight.",
"p8: When you mention \"you give this set of equations to the models look at the top k predictions\" I assume you ranked the substituted equations by the probability that the respective model assigns to it?",
"p8: Do you have an intuition why prediction function evaluations for \"cos\" seem to plateau certain points? ",
"Furthermore, it would be interesting to see what effect the choice of non-linearity on the output of the TreeLSTM has on how accurately it can learn to evaluate functions. ",
"For instance, one could replace the tanh with cos and might expect that the model has now an easy time to learn to evaluate cos(x).",
"p8 Fig 4b; p9: Relating to the question regarding plateaus in the function evaluation: \"in Figure 4b […] the top prediction (0.28) is the correct value for tan with precision 2, but even other predictions are quite close\" – they are all the same and this bad, right?",
"p9: \"of the state-of-the-art neural reasoning systems\" is very broad and in my opinion misleading too. ",
"First, there are other reasoning tasks (machine reading/Q&A, Visual Q&A, knowledge base inference etc.) too ",
"and it is not obvious how ideas from this paper translate to these domains. ",
"Second, for other tasks TreeLSTMs are likely not state-of-the-art ",
"(see for example models on the SQuAD leaderboard: https://rajpurkar.github.io/SQuAD-explorer/) .",
"p9: \"exploring recent neural models that explicitly use memory cells\" ",
"– I think what you mean is models with addressable differentiable memory."
] | [
"fact",
"fact",
"fact",
"evaluation",
"evaluation",
"evaluation",
"fact",
"evaluation",
"evaluation",
"request",
"fact",
"request",
"evaluation",
"fact",
"fact",
"fact",
"evaluation",
"fact",
"fact",
"request",
"evaluation",
"evaluation",
"fact",
"request",
"evaluation",
"fact",
"request",
"request",
"request",
"evaluation",
"fact",
"evaluation",
"fact",
"request",
"request",
"request",
"request",
"request",
"request",
"request",
"request",
"non-arg",
"non-arg",
"request",
"request",
"request",
"request",
"request",
"quote",
"request",
"request",
"request",
"fact",
"request",
"request",
"request",
"request",
"request",
"evaluation",
"non-arg",
"non-arg",
"request",
"evaluation",
"request",
"evaluation",
"fact",
"evaluation",
"evaluation",
"reference",
"quote",
"fact"
] |
Hym3oxKlf | [
"In this paper, the authors have proposed a GAN based method to conduct data augmentation. ",
"The cross-class transformations are mapped to a low dimensional latent space using conditional GAN. ",
"The paper is technically sound and the novelty is significant. ",
"The motivation of the proposed methods is clearly illustrated. ",
"Experiments on three datasets demonstrate the advantage of the proposed framework. ",
"However, this paper still suffers from some drawbacks as below:",
"(1)\tThe illustration of the framework is not clear enough. ",
"For example, in figure 3, it says the GAN is designed for “class c”, which is ambiguous whether the authors trained only one network for all class or trained multiple networks and each is trained on one class.",
"(2)\tSome details is not clearly given, such as the dimension of the Gaussian distribution, the dimension of the projected noise and .",
"(3)\tThe proposed method needs to sample image pairs in each class. ",
"As far as I am concerned, in most cases sampling strategy will affect the performance to some extent. ",
"The authors need to show the robustness to sampling strategy of the proposed method."
] | [
"fact",
"fact",
"evaluation",
"evaluation",
"fact",
"evaluation",
"evaluation",
"evaluation",
"evaluation",
"fact",
"evaluation",
"request"
] |
SJyXoTtlG | [
"This paper introduces a generative approach for 3D point clouds.",
"More specifically, two Generative Adversarial approaches are introduced: Raw point cloud GAN, and Latent-space GAN (r-GAN and l-GAN as referred to in the paper).",
"In addition, a GMM sampling + GAN decoder approach to generation is also among the experimented variations.",
"The results look convincing for the generation experiments in the paper, both from class-specific (Figure 1) and multi-class generators (Figure 6).",
"The quantitative results also support the visuals.",
"One question that arises is whether the point cloud approaches to generation is any more valuable compared to voxel-grid based approaches.",
"Especially Octree based approaches [1-below] show very convincing and high-resolution shape generation results,",
"whereas the details seem to be washed out for the point cloud results presented in this paper.",
"I would like to see comparison experiments with voxel based approaches in the next update for the paper.",
"[1] @article{tatarchenko2017octree, title={Octree Generating Networks: Efficient Convolutional Architectures for High-resolution 3D Outputs}, author={Tatarchenko, Maxim and Dosovitskiy, Alexey and Brox, Thomas}, journal={arXiv preprint arXiv:1703.09438}, year={2017} }"
] | [
"fact",
"fact",
"fact",
"evaluation",
"fact",
"request",
"evaluation",
"fact",
"request",
"reference"
] |
HkBIjt2xz | [
"Summary: This paper presents a derivation which links a DNN to recursive application of maximum entropy model fitting. ",
"The mathematical notation is unclear, ",
"and in one cases the lemmas are circular (i.e. two lemmas each assume the other is correct for their proof). ",
"Additionally the main theorem requires complete independence, ",
"but the second theorem provides pairwise independence, ",
"and the two are not the same.",
"Major comments: - The second condition of the maximum entropy equivalence theorem requires that all T are conditionally independent of Y. ",
"This statement is unclear, ",
"as it could mean pairwise independence, or it could mean jointly independent (i.e. for all pairs of non-overlapping subsets A & B of T I(T_A;T_B|Y) = 0).",
"This is the same as saying the mapping X->T is making each dimension of T orthogonal, as otherwise it would introduce correlations. ",
"The proof of the theorem assumes that pairwise independence induces joint independence ",
"and this is not correct.",
"- Section 4.1 makes an analogy to EM, ",
"but gradient descent is not like this process as all the parameters are updated at once, and only optimised by a single (noisy) step. ",
"The optimisation with respect to a single layer is conditional on all the other layers remaining fixed, ",
"but the gradient information is stale ",
"(as it knows about the previous step of the parameters in the layer above). ",
"This means that gradient descent does all 1..L steps in parallel, ",
"and this is different to the definition given.",
"- The proofs in Appendix C which are used for the statement I(T_i;T_j) >=I(T_i;T_j|Y) are incomplete, ",
"and in generate this statement is not true, ",
"so requires proof.",
"- Lemma 1 appears to assume Lemma 2, and Lemma 2 appears to assume Lemma 1.",
"Either these lemmas are circular or the derivations of both of them are unclear.",
"- In Lemma 3 what is the minimum taken over for the left hand side? ",
"Elsewhere the minimum is taken over T, but T does not appear on the left hand side.",
"Explicit minimums help the reader to follow the logic, ",
"and implicit ones should only be used when it is obvious what the minimum is over.",
"- In Lemma 5, what does \"T is only related to X\" mean? ",
"The proof states that Y -> T -> X forms a Markov chain, ",
"but this implies that T is a function of Y, not X.",
"Minor comments:- I assume that the E_{P(X,Y)} notation is the expectation of that probability distribution, ",
"but this notation is uncommon,",
"and should be replaced with a more explicit one.",
"- Markov is usually romanized with a \"k\" not a \"c\".",
"- The paper is missing numerous prepositions and articles, ",
"and contains multiple spelling mistakes & typos."
] | [
"fact",
"evaluation",
"fact",
"fact",
"fact",
"fact",
"fact",
"evaluation",
"fact",
"fact",
"fact",
"fact",
"fact",
"fact",
"fact",
"fact",
"fact",
"fact",
"fact",
"evaluation",
"fact",
"request",
"fact",
"evaluation",
"request",
"fact",
"evaluation",
"evaluation",
"request",
"fact",
"fact",
"fact",
"evaluation",
"request",
"evaluation",
"evaluation",
"fact"
] |
SkWQLvebf | [
"This paper proposes a deep learning (DL) approach (pre-trained CNNs) to the analysis of histopathological images for disease localization.",
"It correctly identifies the problem that DL usually requires large image databases to provide competitive results,",
"while annotated histopathological data repositories are costly to produce and not on that size scale.",
"It also correctly identifies that this is a daunting task for human medical experts",
"and therefore one that could surely benefit from the use of automated methods like the ones proposed.",
"The study seems sound from a technical viewpoint to me",
"and its contribution is incremental, as it builds on existing research,",
"which is correctly identified.",
"Results are not always too impressive,",
"but authors seem intent on making them useful for pathogists in practice",
"(an intention that is always worth the effort).",
"I think the paper would benefit from a more explicit statement of its original contributions (against contextual published research)",
"Minor issues: Revise typos (e.g. title of section 2)",
"Please revise list of references",
"(right now a mess in terms of format, typos, incompleteness"
] | [
"fact",
"fact",
"fact",
"fact",
"fact",
"evaluation",
"evaluation",
"fact",
"evaluation",
"evaluation",
"evaluation",
"request",
"request",
"request",
"evaluation"
] |
ry_xOQ5ef | [
"This paper creates adversarial images by imposing a flow field on an image such that the new spatially transformed image fools the classifier. ",
"They minimize a total variation loss in addition to the adversarial loss to create perceptually plausible adversarial images, ",
"this is claimed to be better than the normal L2 loss functions.",
"Experiments were done on MNIST, CIFAR-10, and ImageNet, ",
"which is very useful to see that the attack works with high dimensional images. ",
"However, some numbers on ImageNet would be helpful ",
"as the high resolution of it make it potentially different than the low-resolution MNIST and CIFAR.",
"It is a bit concerning to see some parts of Fig. 2. ",
"Some of Fig. 2 (especially (b)) became so dotted that it no longer seems an adversarial that a human eye cannot detect. ",
"And model B in the appendix looks pretty much like a normal model. ",
"It might needs some experiments, either human studies, or to test it against an adversarial detector, to ensure that the resulting adversarials are still indeed adversarials to the human eye. ",
"Another good thing to run would be to try the 3x3 average pooling restoration mechanism in the following paper:",
"Xin Li, Fuxin Li. Adversarial Examples Detection in Deep Networks with Convolutional Filter Statistics . ICCV 2017.",
"to see whether this new type of adversarial example can still be restored by a 3x3 average pooling the image ",
"(I suspect that this is harder to restore by such a simple method than the previous FGSM or OPT-type, but we need some numbers).",
"I also don't think FGSM and OPT are this bad in Fig. 4. ",
"Are the authors sure that if more regularization are used these 2 methods no longer fool the corresponding classifiers?",
"I like the experiment showing the attention heat maps for different attacks. ",
"This experiment shows that the spatial transforming attack (stAdv) changes the attention of the classifier for each target class, and is robust to adversarially trained Inception v3 unlike other attacks like FGSM and CW. ",
"I would likely upgrade to a 7 if those concerns are addressed."
] | [
"fact",
"fact",
"fact",
"fact",
"evaluation",
"request",
"fact",
"evaluation",
"fact",
"evaluation",
"request",
"request",
"reference",
"fact",
"evaluation",
"evaluation",
"non-arg",
"evaluation",
"fact",
"non-arg"
] |
r1cIB5Fxf | [
"Paper proposes to use a convolutional network with 3 layers (convolutional + maxpoolong + fully connected layers) to embed time series in a new space such that an Euclidian distance is effective to perform a classification. ",
"The algorithm is simple and experiments show that it is effective on a limited benchmark. ",
"It would be interesting to enlarge the dataset to be able to compare statistically the results with state-of-the-art algorithms. ",
"In addition, Authors compare themselves with time series metric learning and generalization of DTW algorithms. ",
"It would also be interesting to compare with other types of time series classification algorithms (Bagnall 2016) ."
] | [
"fact",
"evaluation",
"request",
"fact",
"request"
] |
B1m1clFlM | [
"This paper presents MAd-RL, a method for decomposition of a single-agent RL problem into a simple sub-problems, and aggregating them back together.",
"Specifically, the authors propose a novel local planner - emphatic, and analyze the newly proposed local planner along of two existing ones - egocentric and agnostic.",
"The MAd-RL, and theoretical analysis, is evaluated on the Pac-Boy task, and compared to DQN and Q-learning with function approximation.",
"Pros: 1. The paper is well written, and well-motivated.",
"2. The authors did an extraordinary job in building the intuition for the theoretical work, and giving appropriate examples where needed.",
"3. The theoretical analysis of the paper is extremely interesting.",
"The observation that a linearly weighted reward, implies linearly weighted Q function, analysis of different policies, and local minima that result is the strongest and the most interesting points of this paper.",
"Cons:1. The paper is too long.",
"14 pages total - 4 extra pages (in appendix) over the 8 page limit,",
"and 1 extra page of references.",
"That is 50% overrun in the context,",
"and 100% overrun in the references.",
"The most interesting parts and the most of the contributions are in the Appendix,",
"which makes it hard to assess the contributions of the paper.",
"There are two options: 1.1 If the paper is to be considered as a whole, the excessive overrun gives this paper unfair advantage over other ICLR papers.",
"The flavor and scope and quality of the problems that can be tackled with 50% more space is substantially different from what can be addressed within the set limit.",
"If the extra space is necessary, perhaps this paper is better suited for another publication?",
"1.2 If the paper is assessed only based on the main part without Appendix, then the only novelty is emphatic planner, and the theoretical claims with no proofs.",
"The results are interesting,",
"but are lacking implementation details.",
"Overall, a substandard paper.",
"2. Experiments are disjoint from the method’s section.",
"For example:2.1 Section 5.1 is completely unrelated with the material presented in Section 4.",
"2.2 The noise evaluation in Section 5.3 is nice,",
"but not related with the Section 4.",
"This is problematic because, it is not clear if the focus of the paper is on evaluating MAd-RL and performance on the Ms.PacMan task, or experimentally demonstrating claims in Section 4.",
"Recommendations:1. Shorten the paper to be within (or close to the recommended length) including Appendix.",
"2. Focus paper on the analysis of the advisors,",
"and Section 5. on demonstrating the claims.",
"3. Be more explicit about the contributions.",
"4. How does the negative reward influence the behavior the agent?",
"The agent receives negative reward when near ghosts.",
"5. Move the short (or all) proofs from Appendix into the main text.",
"6. Move implementation details of the experiments (in particular the short ones) into the main text.",
"7. Use the standard terminology (greedy and random policies vs. egoistic and agnostic) where possible.",
"The new terms for well-established make the paper needlessly more complex.",
"8. Focus the literature review on the most relevant work, and contrast the proposed work with existing peer reviewed methods.",
"9. Revise the literature to emphasize more recent peer reviewed references.",
"Only three references are recent (less than 5 years), peer reviewed references,",
"while there are 12 historic references.",
"Try to reduce dependencies on non-peer reviewed references (~10 of them).",
"10. Make a pass through the paper, and decouple it from the van Seijen et al., 2017a",
"11. Minor: Some claims need references:",
"11.1 Page 5: “egocentric sub-optimality does not come from the actions that are equally good, nor from the determinism of the policy, since adding randomness…” -",
"Wouldn’t adding epsilon-greediness get the agent unstuck?",
"11.2 Page 1. “It is shown on the navigation task ….” -",
"This seems to be shown later in the results,",
"but in the intro it is not clear if some other work, or this one shows it.",
"12. Minor:12.1 Mix genders when talking about people.",
"Don’t assume all people that make “complex and important problems”, or who are “consulted for advice”, are male.",
"12.2 Typo: Page 5: a_0 sine die",
"12.3 Page 7 - omit results that are not shown",
"12.4 Make Figures larger - it is difficult, if not impossible to see",
"12.5 What is the difference between Pac-Boy and Ms. Pacman task? And why not use Ms. Packman?"
] | [
"fact",
"fact",
"fact",
"evaluation",
"evaluation",
"evaluation",
"evaluation",
"evaluation",
"fact",
"fact",
"fact",
"fact",
"evaluation",
"evaluation",
"evaluation",
"evaluation",
"request",
"evaluation",
"evaluation",
"evaluation",
"evaluation",
"evaluation",
"evaluation",
"evaluation",
"evaluation",
"evaluation",
"request",
"request",
"request",
"request",
"non-arg",
"fact",
"request",
"request",
"request",
"evaluation",
"request",
"request",
"evaluation",
"fact",
"request",
"request",
"request",
"quote",
"non-arg",
"quote",
"fact",
"evaluation",
"fact",
"request",
"fact",
"fact",
"request",
"non-arg"
] |
HJIPOSAbf | [
"The paper develops an interesting approach for solving multi-class classification with softmax loss.",
"The key idea is to reformulate the problem as a convex minimization of a \"double-sum\" structure via a simple conjugation trick. ",
"SGD is applied to the reformulation: in each step samples a subset of the training samples and labels, which appear both in the double sum. ",
"The main contributions of this paper are: \"U-max\" idea (for numerical stability reasons) and an \"\"proposing an \"implicit SGD\" idea.",
"Unlike the first review, I see what the term \"exact\" in the title is supposed to mean. ",
"I believe this was explained in the paper. ",
"I agree with the second reviewer that the approach is interesting. ",
"However, I also agree with the criticism ",
"(double sum formulations exist in the literature; ",
"comments about experiments); ",
"and will not repeat it here. ",
"I will stress though that the statement about Newton in the paper is not justified. ",
"Newton method does not converge globally with linear rate. ",
"Cubic regularisation is needed for global convergence. ",
"Local rate is quadratic. ",
"I believe the paper could warrant acceptance if all criticism raised by reviewer 2 is addressed.",
"I apologise for short and late review: I got access to the paper only after the original review deadline."
] | [
"evaluation",
"evaluation",
"fact",
"evaluation",
"evaluation",
"fact",
"evaluation",
"evaluation",
"fact",
"non-arg",
"non-arg",
"fact",
"fact",
"fact",
"fact",
"evaluation",
"non-arg"
] |
HJecicqxG | [
"In conventional boosting methods, one puts a weight on each sample.",
"The wrongly classified samples get large weights such that in the next round those samples will be more likely to get right.",
"Thus the learned weak learner at this round will make different mistakes.",
"This idea however is difficult to be applied to deep learning with a large amount of data.",
"This paper instead designed a new boosting method which puts large weights on the category with large error in this round.",
"In other words samples in the same category will have the same weight",
"Error bound is derived.",
"Experiments show its usefulness",
"though experiments are limited"
] | [
"fact",
"fact",
"fact",
"evaluation",
"fact",
"fact",
"fact",
"fact",
"evaluation"
] |
rJ9BTHFez | [
"Summary ******* The paper provides a collection of existing results in statistics.",
"Comments ******** Page 1: references to Q-learning and Policy-gradients look awkwardly recent, ",
"given that these have been around for several decades.",
"I dont get what is the novelty in this paper. ",
"There is no doubt that all the tools that are detailed here are extremely useful and powerful results in mathematical statistics. ",
"But they are all known.",
"The Gibbs variational principle is folklore, ",
"Proposition 1,2 are available in all good text books on the topic, ",
"and Proposition 4 is nothing but a transportation Lemma.",
"Now, Proposition 3 is about soft-Bellman operators. ",
"This perhaps is less standard ",
"because contraction property of soft-Bellman operator in infinite norm is more recent than for Bellman operators.",
"But as mentioned by the authors, this is not new either. ",
"Also I don't really see the point of providing the proofs of these results in the main material, and not for instance in appendix, ",
"as there is no novelty either in the proof techniques.",
"I don't get the sentence \"we have restricted so far the proof in the bandit setting\": ",
"bandits are not even mentioned earlier.",
"Decision ******** I am sorry but unless I missed something (that then should be clarified) this seems to be an empty paper: Strong reject."
] | [
"fact",
"evaluation",
"fact",
"evaluation",
"evaluation",
"fact",
"evaluation",
"fact",
"fact",
"fact",
"evaluation",
"fact",
"fact",
"evaluation",
"evaluation",
"evaluation",
"fact",
"evaluation"
] |
BJ1X3tYgf | [
"The paper treats the interesting problem of long term video prediction in complex video streams. ",
"I think the approach of adding more structure to their representation before making longer term prediction is also a reasonable one. ",
"Their approach combines an RNN that predicts an encoding of scene and then generating an image prediction using a VAN (Reed et al.). ",
"They show some results on the Human3.6M and the Robot Push dataset. ",
"I find the submission lacking clarity in many places. ",
"The main lack of clarity source I think is about what the contribution is. ",
"There are sparse mentions in the introduction ",
"but I think it would be much more forceful and clear if they would present VAN or Villegas et al method separately and then put the pieces together for their method in a separate section. ",
"This would allow the author to clearly delineate their contribution and maybe why those choices were made. ",
"Also the use of hierarchical is non-standard and leads to confusion I recommend maybe \"semantical\" or better \"latent structured\" instead. ",
"Smaller ambiguities in wording are also in the paper : ",
"e.g. related work -> long term prediction \"in this work\" refers to the work mentioned but could as well be the work that they are presenting. ",
"I find some of the claims not clearly backed by a thorough evaluation and analysis. ",
"Claiming to be able to produce encodings of scenes that work well at predicting many steps into the future is a very strong claim. ",
"I find the few images provided very little evidence for that fact. ",
"I think a toy example where this is clearly the case ",
"because we know exactly the factors of variations and they are inferred by the algorithm automatically or some better ones are discovered by the algorithm, ",
"that would make it a very strong submission. ",
"Reed et al. have a few examples that could be adapted to this setting and the resulting representation, analyzed appropriately, would shed some light into whether this is the right approach for long term video prediction and what are the nobs that should be tweaked in this system. ",
"In the current format, I think that the authors are on a good path ",
"and I hope my suggestions will help them improve their submission, ",
"but as it stands I recommend rejection from this conference."
] | [
"fact",
"evaluation",
"fact",
"fact",
"evaluation",
"evaluation",
"fact",
"request",
"evaluation",
"request",
"evaluation",
"evaluation",
"evaluation",
"evaluation",
"evaluation",
"evaluation",
"evaluation",
"evaluation",
"evaluation",
"evaluation",
"evaluation",
"evaluation"
] |