id | paper_text | review
---|---|---|
nips_2017_3302 | Modulating early visual processing by language
It is commonly assumed that language refers to high-level visual concepts while leaving low-level visual processing unaffected. This view dominates the current literature in computational models for language-vision tasks, where visual and linguistic inputs are mostly processed independently before being fused into a single representation. In this paper, we deviate from this classic pipeline and propose to modulate the entire visual processing by a linguistic input. Specifically, we introduce Conditional Batch Normalization (CBN) as an efficient mechanism to modulate convolutional feature maps by a linguistic embedding. We apply CBN to a pre-trained Residual Network (ResNet), leading to the MODulatEd ResNet (MODERN) architecture, and show that this significantly improves strong baselines on two visual question answering tasks. Our ablation study confirms that modulating from the early stages of the visual processing is beneficial. | The paper proposes a novel method called conditional batch normalization (CBN) to be applied on top of existing visual question answering models in order to modulate the visual processing with language information from the question in the early stages. In the proposed method, only the parameters of the batch norm layer of a pre-trained CNN are updated with the VQA loss by conditioning them on the LSTM embedding of the input question.
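To make the mechanism under review concrete, the following is a minimal sketch of language-conditioned batch normalization in PyTorch-style code; the module structure, MLP sizes, and parameter names are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class ConditionalBatchNorm2d(nn.Module):
    """Batch norm whose scale/shift are perturbed by a language embedding.

    The pre-trained CNN's gamma/beta stay frozen; a small MLP predicts per-channel
    deltas from the question embedding (illustrative sketch, not the paper's code).
    """
    def __init__(self, num_channels, lang_dim, hidden_dim=256):
        super().__init__()
        self.bn = nn.BatchNorm2d(num_channels, affine=False)
        self.gamma = nn.Parameter(torch.ones(num_channels), requires_grad=False)
        self.beta = nn.Parameter(torch.zeros(num_channels), requires_grad=False)
        # MLP predicting delta_gamma and delta_beta from the LSTM question embedding
        self.mlp = nn.Sequential(
            nn.Linear(lang_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, 2 * num_channels),
        )

    def forward(self, feature_map, lang_embedding):
        delta = self.mlp(lang_embedding)              # (B, 2C)
        d_gamma, d_beta = delta.chunk(2, dim=1)       # (B, C) each
        normalized = self.bn(feature_map)             # (B, C, H, W)
        gamma = (self.gamma + d_gamma).unsqueeze(-1).unsqueeze(-1)
        beta = (self.beta + d_beta).unsqueeze(-1).unsqueeze(-1)
        return gamma * normalized + beta
```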
The paper evaluates the effectiveness of CBN on two VQA datasets – the VQA dataset from Antol et al., ICCV15 and the GuessWhat?! dataset from Vries et al., CVPR17. The experimental results show that CBN helps improve the performance on VQA by a significant amount. The paper also studies the effectiveness of adding CBN to different layers and shows that adding CBN to the last (top) 2 layers of the CNN helps the most. The paper also shows quantitatively that the improvements in VQA performance are not merely due to fine-tuning of the CNN, by showing that the proposed model performs better than a model in which the Batch Norm parameters are fine-tuned but without conditioning on the language, hence demonstrating that modulating with language helps.
Strengths:
1. The paper is well-motivated and the idea of modulating early visual processing by language is novel and interesting for the VQA task.
2. The proposed contribution (CBN) can be added on top of any existing VQA model, hence making it widely applicable.
3. The ablation studies are meaningful and are informative about how much early modulation by language helps.
4. The paper provides the details of the hyper-parameters, hence making the work reproducible.
Weaknesses:
1. The main contribution of the paper is CBN. But the experimental results in the paper are not advancing the state-of-the-art in VQA (on the VQA dataset, which has been out for a while and on which a lot of advancement has been made), perhaps because the VQA model used in the paper on top of which CBN is applied is not the best one out there. But in order to claim that CBN should help even the more powerful VQA models, I would like the authors to conduct experiments on more than one VQA model – preferably ones which are closer to the state-of-the-art (and whose code is publicly available) such as MCB (Fukui et al., EMNLP16), HieCoAtt (Lu et al., NIPS16). It could be the case that these more powerful VQA models are already so powerful that the proposed early modulating does not help. So, it is good to know whether the proposed conditional batch norm can advance the state-of-the-art in VQA or not.
2. L170: it would be good to know how much of a performance difference this (using different image sizes and different variations of ResNets) can lead to.
3. In table 1, the results on the VQA dataset are reported on the test-dev split. However, as mentioned in the guidelines from the VQA dataset authors (http://www.visualqa.org/vqa_v1_challenge.html), numbers should be reported on test-standard split because one can overfit to test-dev split by uploading multiple entries.
4. In Table 2, applying Conditional Batch Norm to layer 2 in addition to layers 3 and 4 deteriorates performance for GuessWhat?! compared to when CBN is applied to layers 4 and 3 only. Could the authors please shed some light on this? Why do they think this might be happening?
5. Figure 4 visualization: the visualization in figure (a) is from ResNet which is not finetuned at all. So, it is not very surprising to see that there are not clear clusters for answer types. However, the visualization in figure (b) is using ResNet whose batch norm parameters have been finetuned with question information. So, I think a more meaningful comparison of figure (b) would be with the visualization from Ft BN ResNet in figure (a).
6. The first two bullets about contributions (at the end of the intro) can be combined together.
7. Other errors/typos:
a. L14 and 15: repetition of word “imagine”
b. L42: missing reference
c. L56: impact -> impacts
Post-rebuttal comments:
The new results of applying CBN on the MRN model are interesting and convincing that CBN helps fairly developed VQA models as well (though the results have not been reported on a state-of-the-art VQA model). So, I would like to recommend acceptance of the paper.
However, I still have a few comments --
1. It seems that there is still some confusion about test-standard and test-dev splits of the VQA dataset. In the rebuttal, the authors report the performance of the MCB model to be 62.5% on test-standard split. However, 62.5% seems to be the performance of the MCB model on the test-dev split as per table 1 in the MCB paper (https://arxiv.org/pdf/1606.01847.pdf).
2. The reproduced performance reported on MRN model seems close to that reported in the MRN paper when the model is trained using VQA train + val data. I would like the authors to clarify in the final version if they used train + val or just train to train the MRN and MRN + CBN models. And if train + val is being used, the performance can't be compared with 62.5% of MCB because that is when MCB is trained on train only. When MCB is trained on train + val, the performance is around 64% (table 4 in MCB paper).
3. The citation for the MRN model (in the rebuttal) is incorrect. It should be --
@inproceedings{kim2016multimodal,
title={Multimodal residual learning for visual qa},
author={Kim, Jin-Hwa and Lee, Sang-Woo and Kwak, Donghyun and Heo, Min-Oh and Kim, Jeonghee and Ha, Jung-Woo and Zhang, Byoung-Tak},
booktitle={Advances in Neural Information Processing Systems},
pages={361--369},
year={2016}
}
4. Like AR2 and AR3, I would be interested in seeing if the findings from ResNet carry over to other CNN architectures such as VGGNet as well. |
nips_2017_3164 | Scalable Planning with Tensorflow for Hybrid Nonlinear Domains
Given recent deep learning results that demonstrate the ability to effectively optimize high-dimensional non-convex functions with gradient descent optimization on GPUs, we ask in this paper whether symbolic gradient optimization tools such as Tensorflow can be effective for planning in hybrid (mixed discrete and continuous) nonlinear domains with high dimensional state and action spaces? To this end, we demonstrate that hybrid planning with Tensorflow and RMSProp gradient descent is competitive with mixed integer linear program (MILP) based optimization on piecewise linear planning domains (where we can compute optimal solutions) and substantially outperforms state-of-the-art interior point methods for nonlinear planning domains. Furthermore, we remark that Tensorflow is highly scalable, converging to a strong plan on a large-scale concurrent domain with a total of 576,000 continuous action parameters distributed over a horizon of 96 time steps and 100 parallel instances in only 4 minutes. We provide a number of insights that clarify such strong performance including observations that despite long horizons, RMSProp avoids both the vanishing and exploding gradient problems. Together these results suggest a new frontier for highly scalable planning in nonlinear hybrid domains by leveraging GPUs and the power of recent advances in gradient descent with highly optimized toolkits like Tensorflow. | This paper describes a gradient ascent approach to action selection for planning domains with both continuous and discrete actions. The paper shows that backpropagation can be used to optimize the actions while avoiding long horizon problems such as exploding or vanishing gradients. Experimental results are given for 2D planar navigation, reservoir control and HVAC control and the proposed approach is shown to outperform existing hybrid optimization techniques.
Planning in hybrid domains is an important problem and existing methods do not scale to real world domains. This paper proposes a scalable solution to these problems and demonstrates its usefulness on several domains including two real world domains. As such, I believe it makes an important contribution to addressing real world planning problems. The paper is well written except for some minor comments (see below).
Evidence that the proposed solution works comes in the form of several experiments that demonstrate scalability and solution quality.
I have two concerns about the experiments that I would like to see the author's comments on:
1. The paper lacks any statistical significance results, error bars, or other statistics of variation.
2. Are the experimental baselines the state-of-the-art of hybrid domain planning problems? The introduction mentions other methods but justification is not given for not making an empirical comparison.
This paper needs a distinct related work section. The current treatment of related work is limited and is just in the introduction.
My last major question is whether or not Tensorflow is a critical part of the proposed method. It seems that what is important is the use of backpropagation, and Tensorflow is just an implementation detail. I think the impact of the work might be greater if the method were presented more generally. Or is there something I'm missing that makes Tensorflow essential?
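To illustrate this point (that the essential ingredient is differentiating the cumulative reward with respect to the action sequence, not any particular toolkit), a minimal sketch could look as follows; the toy dynamics, reward, and hyper-parameters are hypothetical and not taken from the paper.

```python
import torch

def plan_by_backprop(dynamics, reward, s0, action_dim, horizon=20, steps=200, lr=0.1):
    """Optimize a sequence of continuous actions by gradient ascent on total reward.

    `dynamics` and `reward` are differentiable functions; any autodiff framework
    with an RMSProp-style optimizer can implement the same idea.
    """
    actions = torch.zeros(horizon, action_dim, requires_grad=True)
    opt = torch.optim.RMSprop([actions], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        s, total_reward = s0, 0.0
        for t in range(horizon):
            s = dynamics(s, actions[t])
            total_reward = total_reward + reward(s, actions[t])
        (-total_reward).backward()   # ascent on reward == descent on its negative
        opt.step()
    return actions.detach()

# Illustrative toy domain: 2D navigation toward a goal (not one of the paper's domains).
goal = torch.tensor([5.0, 5.0])
dynamics = lambda s, a: s + 0.1 * torch.tanh(a)      # bounded moves
reward = lambda s, a: -torch.sum((s - goal) ** 2)    # negative squared distance to goal
plan = plan_by_backprop(dynamics, reward, s0=torch.zeros(2), action_dim=2)
```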
Overall, the work presents a useful method for planning in hybrid domain problems. Its main limitations are the handling of related literature and statistical significance. Clarifying these issues and addressing the minor comments given below would help improve the impact of the work.
I have read the author's response and other reviews.
Minor comments:
Line 35: I.e., -> i.e.,
Line 35: Should cite Watkins for Q-learning
Line 106: Do you mean optimizing V directly?
Line 117: What do you mean by transfer function? Did you mean activation function?
Line 134: This statement seems too strong? Why are there always constraints on the actions? Is it more appropriate to say, "planning problems frequently have constraints on the actions that can be taken"?
Line 164 - 167: The top line (d_t = ||s_t - s||) is too far over.
Line 170: It is not clear what the dynamics are for linear and bilinear. It would be useful to write them explicitly as done for the nonlinear dynamics.
Figure 3: This figure would be more clear if the y-axis was in the correct direction. It would also be useful to have a statement such as "lower is better" so the reader can quickly compare TF to the other baselines.
Figure 3: Since the MILP solution is optimal it would be more useful to see relative error values for TF and Heuristic. Absolute reward isn't as informative since it's sensitive to reward scaling.
All figures: It would be nice to see error bars on all applicable plots
Line 235: "to for" -> for
Figure 6: space before (b)
All figures: check for black and white printing, especially figure 6
Line 262: RMSProb -> RMSProp |
nips_2017_3165 | Boltzmann Exploration Done Right
Boltzmann exploration is a classic strategy for sequential decision-making under uncertainty, and is one of the most standard tools in Reinforcement Learning (RL). Despite its widespread use, there is virtually no theoretical understanding about the limitations or the actual benefits of this exploration scheme. Does it drive exploration in a meaningful way? Is it prone to misidentifying the optimal actions or spending too much time exploring the suboptimal ones? What is the right tuning for the learning rate? In this paper, we address several of these questions for the classic setup of stochastic multi-armed bandits. One of our main results is showing that the Boltzmann exploration strategy with any monotone learning-rate sequence will induce suboptimal behavior. As a remedy, we offer a simple non-monotone schedule that guarantees near-optimal performance, albeit only when given prior access to key problem parameters that are typically not available in practical situations (like the time horizon T and the suboptimality gap ∆). More importantly, we propose a novel variant that uses different learning rates for different arms, and achieves a distribution-dependent regret bound of order K log²(T)/∆ and a distribution-independent bound of order √KT log K without requiring such prior knowledge. To demonstrate the flexibility of our technique, we also propose a variant that guarantees the same performance bounds even if the rewards are heavy-tailed.
in reinforcement learning [23,25,14,26,16,18]. In the multi-armed bandit literature, exponential-weights algorithms are also widespread, but they typically use importance-weighted estimators for the rewards - see, e.g., [6,8] (for the nonstochastic setting), [12] (for the stochastic setting), and [20] (for both stochastic and nonstochastic regimes). The theoretical behavior of these algorithms is generally well understood. For example, in the stochastic bandit setting Seldin and Slivkins [20] show a regret bound of order
, where ∆ is the suboptimality gap (i.e., the smallest difference between the mean reward of the optimal arm and the mean reward of any other arm).
In this paper, we aim to achieve a better theoretical understanding of the basic variant of the Boltzmann exploration policy that relies on the empirical mean rewards. We first show that any monotone learning-rate schedule will inevitably force the policy to either spend too much time drawing suboptimal arms or completely fail to identify the optimal arm. Then, we show that a specific non-monotone schedule of the learning rates can lead to a regret bound of order K log(T)/∆². However, the learning schedule has to rely on full knowledge of the gap ∆ and the number of rounds T. Moreover, our negative result helps us to identify a crucial shortcoming of the Boltzmann exploration policy: it does not reason about the uncertainty of the empirical reward estimates. To alleviate this issue, we propose a variant that takes this uncertainty into account by using separate learning rates for each arm, where the learning rates account for the uncertainty of each reward estimate. We show that the resulting algorithm guarantees a distribution-dependent regret bound of order K log²(T)/∆, and a distribution-independent bound of order √KT log K.
Our algorithm and analysis is based on the so-called Gumbel-softmax trick that connects the exponential-weights distribution with the maximum of independent random variables from the Gumbel distribution. | Pros:
- A systematic study of the classical Boltzmann exploration heuristic in the context of multi-armed bandits. The results provide useful insights into the understanding of Boltzmann exploration and multi-armed bandits
- The paper is clearly written
Cons:
- The technique is incremental, and the technical contribution to multi-armed bandit research is small.
The paper studies the Boltzmann exploration heuristic for reinforcement learning, namely using empirical means and exponential weights to probabilistically select actions (arms), in the context of multi-armed bandits. The purpose of the paper is to achieve a proper theoretical understanding of the Boltzmann exploration heuristic. I view that the paper achieves this goal through several useful results. First, the authors show that the standard Boltzmann heuristic may not achieve a good learning result; in fact, the regret could be linear when using monotone learning rates. Second, the authors show that, if the learning rate remains constant for a logarithmic number of steps and then increases, the regret is close to the optimal one. This learning strategy is essentially an explore-then-commit strategy, but the catch is that it needs to know the critical problem parameters \Delta and T, which are typically unknown. Third, the authors propose to generalize Boltzmann exploration by allowing individual learning rates for different arms based on their certainty, and show that this leads to good regret bounds.
The above series of results provides a good understanding of Boltzmann exploration. In particular, it provides the theoretical insight that naive Boltzmann exploration lacks control over the uncertainty of the arms, so it may not perform well. This insight may be useful in the more general setting of reinforcement learning.
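For reference, the basic heuristic being analyzed can be sketched as follows; the `pull` and `eta_schedule` functions are placeholders, and this is only an illustration of generic Boltzmann exploration over empirical means, not the authors' refined per-arm variant.

```python
import numpy as np

def boltzmann_bandit(pull, n_arms, horizon, eta_schedule):
    """Basic Boltzmann exploration with empirical means (illustrative sketch).

    eta_schedule(t) returns the learning rate (inverse temperature) at round t;
    the paper's negative result concerns how this schedule can be chosen.
    """
    means = np.zeros(n_arms)
    counts = np.zeros(n_arms)
    for t in range(1, horizon + 1):
        eta = eta_schedule(t)
        logits = eta * means
        probs = np.exp(logits - logits.max())   # softmax over empirical means
        probs /= probs.sum()
        arm = np.random.choice(n_arms, p=probs)
        reward = pull(arm)
        counts[arm] += 1
        means[arm] += (reward - means[arm]) / counts[arm]
    return means, counts
```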
The paper is in general well written and easy to follow.
The technical novelty of the paper is incremental. The analysis is based on existing techniques. The new technical contribution to multi-armed bandit research is likely to be small, since there are already a number of solutions achieving optimal or near-optimal regret bounds.
Minor comments:
- line 243, log_+(\cdot) = min{0, \cdot}. Should it be max instead of min? |
nips_2017_3303 | Learning Mixture of Gaussians with Streaming Data
In this paper, we study the problem of learning a mixture of Gaussians with streaming data: given a stream of N points in d dimensions generated by an unknown mixture of k spherical Gaussians, the goal is to estimate the model parameters using a single pass over the data stream. We analyze a streaming version of the popular Lloyd's heuristic and show that the algorithm estimates all the unknown centers of the component Gaussians accurately if they are sufficiently separated. Assuming each pair of centers are Cσ distant with C = Ω((k log k)^(1/4) σ) and where σ² is the maximum variance of any Gaussian component, we show that asymptotically the algorithm estimates the centers optimally (up to certain constants); our center separation requirement matches the best known result for spherical Gaussians [18]. For finite samples, we show that a bias term based on the initial estimate decreases at an O(1/poly(N)) rate while the variance decreases at a nearly optimal rate of σ²d/N. Our analysis requires seeding the algorithm with a good initial estimate of the true cluster centers, for which we provide an online-PCA-based clustering algorithm. Indeed, the asymptotic per-step time complexity of our algorithm is the optimal d · k while the space complexity of our algorithm is O(dk log k). In addition to the bias and variance terms which tend to 0, the hard-thresholding based updates of the streaming Lloyd's algorithm are agnostic to the data distribution and hence incur an approximation error that cannot be avoided. However, by using a streaming version of the classical (soft-thresholding-based) EM method that exploits the Gaussian distribution explicitly, we show that for a mixture of two Gaussians the true means can be estimated consistently, with estimation error decreasing at a nearly optimal rate, and tending to 0 for N → ∞. | This paper proposes a Lloyd-type method with PCA initialization to estimate means of Gaussians in a streaming setting. References seem to be severely lacking as the domain is wide. The key point of the algorithm seems to be the initialization and there is no discussion of this part (comparison with the literature on k-means initialization). While the technical results might be interesting, I have difficulties commenting on them without the proper background on the subject. I also find that the technical exposition of the proofs is not very clear (notation, chaining of arguments).
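For readers without the background mentioned above, the kind of single-pass, hard-assignment update analyzed in the paper can be roughly sketched as follows; the step-size schedule and initialization are placeholders, and the paper's exact variant (including its online-PCA seeding) differs in details.

```python
import numpy as np

def streaming_lloyds(stream, centers, step=lambda t: 1.0 / (t + 1)):
    """One-pass, hard-assignment center updates on a stream of points.

    `centers` is the initial estimate (the paper seeds it with an online-PCA
    based clustering); each incoming point moves only its nearest center.
    """
    counts = np.zeros(len(centers))
    for t, x in enumerate(stream):
        j = np.argmin(np.linalg.norm(centers - x, axis=1))  # hard assignment
        counts[j] += 1
        centers[j] += step(t) * (x - centers[j])            # move the nearest center
    return centers, counts
```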
Other concerns
1. Line 116 : What about the estimation of variance and weights?
2. Algorithm 1 : How do you know N in a streaming setting?
3. Theorem 1 : The error bound assumes that estimated centers and true centers are matched. The bound is a k-means type bound. Hence, the title of the paper is misleading, as it should talk about k-means clustering of Gaussian mixtures. The estimation of sigma is vaguely discussed in Section 6.
4. Section 6 looks like a very partial result with no appropriate discussion on why it is interesting. |
nips_2017_2504 | Context Selection for Embedding Models
Word embeddings are an effective tool to analyze language. They have been recently extended to model other types of data beyond text, such as items in recommendation systems. Embedding models consider the probability of a target observation (a word or an item) conditioned on the elements in the context (other words or items). In this paper, we show that conditioning on all the elements in the context is not optimal. Instead, we model the probability of the target conditioned on a learned subset of the elements in the context. We use amortized variational inference to automatically choose this subset. Compared to standard embedding models, this method improves predictions and the quality of the embeddings. | The authors propose an extension to the Exponential Family Embeddings (EFE) model for producing low dimensional representations of graph data based on its context (EFE extends word2vec-style word embedding models to other data types such as counts or real number by using embedding-context scores to produce the natural parameters of various exponential family distributions). They note that while context-based embedding models have been extensively researched, some contexts are more relevant than others for predicting a given target and informing its embedding.
This observation has been made for word embeddings in prior work, with [1] using a learned attention mechanism to form a weighted average of predictive token contexts and [2] learning part-of-speech-specific classifiers to produce context weights. Citations to this related work should be added to the paper. There has also been prior work that learns fixed position-dependent weights for each word embedding context, but I am not able to recall the exact citation.
The authors propose a significantly different model to this prior work, however, and use a Bayesian model with latent binary masks to choose the relevant context vectors, which they infer with an amortized neural variational inference method. Several technical novelties are introduced to deal with the sparsity of relevant contexts and the difficulties with discrete latent variables and variably-sized element selection.
They show that their method gives significant improvements in held-out pseudolikelihood on several exponential family embedding tasks, as well as small improvements in unsupervised discovery of movie genres. They also demonstrate qualitatively that the learned variational posterior over relevant contexts makes sensible predictions with an examination of chosen contexts on a dataset of Safeway food products.
The idea of the paper is a straightforward, intuitively appealing, clear improvement over the standard EFE model, and the technical challenges presented by inference and their solutions will be interesting to practitioners using large-scale variational models.
One interesting technical contribution is the architecture of the amortized inference network, which must deal with variable-sized contexts to predict a posterior over each latent Bernoulli masking variable, and solves the problem of variable-sized contexts using soft binning with Gaussian kernels.
Another is the use of posterior regularization to enforce a sparsity penalty on the selected variables. Presumably, because the Bernoulli priors on the mask variables are independent, they are insufficient to enforce the sort of "spiky" sparsity that we want in context selection, and they show significant improvements to held out likelihood by varying the posterior regularization on effectively the average number of selected contexts. Perhaps the authors would like to more explicitly comment on the trade-off between posterior regularization and prior tuning to obtain the desired sparsity.
Overall, while the evaluation in the paper is fairly limited, the paper presents a clear improvement to the Exponential Family Embeddings model with a sound probabilistic model and inference techniques that will be of interest to practitioners.
1. Ling et al. Not All Contexts Are Created Equal: Better Word Representations with Variable Attention. 2015
2. Liu et al. Part-of-Speech Relevance Weights for Learning Word Embeddings. 2016 |
nips_2017_1448 | Deep Supervised Discrete Hashing
With the rapid growth of image and video data on the web, hashing has been extensively studied for image or video search in recent years. Benefiting from recent advances in deep learning, deep hashing methods have achieved promising results for image retrieval. However, there are some limitations of previous deep hashing methods (e.g., the semantic information is not fully exploited). In this paper, we develop a deep supervised discrete hashing algorithm based on the assumption that the learned binary codes should be ideal for classification. Both the pairwise label information and the classification information are used to learn the hash codes within one stream framework. We constrain the outputs of the last layer to be binary codes directly, which is rarely investigated in deep hashing algorithms. Because of the discrete nature of hash codes, an alternating minimization method is used to optimize the objective function. Experimental results have shown that our method outperforms current state-of-the-art methods on benchmark datasets. | The paper proposes a supervised hashing algorithm based on a neural network. This network aims to output the binary codes while optimizing the loss for the classification error. To jointly optimize the two losses, the proposed method adopts an alternating strategy to minimize the objectives. Experiments on two datasets show good performance compared to other hashing approaches, including the ones that use deep learning frameworks.
Pros:
The paper is written well and easy to follow. Generally, the idea is well-motivated, where the proposed algorithm and optimization strategy are sound and effective.
Cons:
Some references are missing, as shown below, especially since [1] shares similar ideas. It is necessary to discuss [1] and carry out an in-depth performance evaluation to clearly demonstrate the merits of this work.
[1] Zhang et al., Efficient Training of Very Deep Neural Networks for Supervised Hashing, CVPR16
[2] Liu et al., Deep Supervised Hashing for Fast Image Retrieval, CVPR16
There are missing details for training the network (e.g., hyper-parameters), so it may not be possible to reproduce the results accurately. It would be better if the authors would release the code if the paper is accepted.
Since the method adopts an alternating optimization approach, it would be better to show the loss curves and demonstrate the effectiveness of the convergence.
The study in Figure 1 is interesting. However, the results and explanations (Ln 229-233) are not consistent. For example, DSDH-B is not always better than DSDH-A. In addition, the reason given for DSDH-C being only slightly better than DSDH-A does not sound right (Ln 231-232). It may be due to the different binarization strategies, and more experiments would be required to validate it.
Why is DSDH-B better than DSDH in Figure 1(a) when using more bits? |
nips_2017_2786 | EEG-GRAPH: A Factor-Graph-Based Model for Capturing Spatial, Temporal, and Observational Relationships in Electroencephalograms
This paper presents a probabilistic-graphical model that can be used to infer characteristics of instantaneous brain activity by jointly analyzing spatial and temporal dependencies observed in electroencephalograms (EEG). Specifically, we describe a factor-graph-based model with customized factor-functions defined based on domain knowledge, to infer pathologic brain activity with the goal of identifying seizure-generating brain regions in epilepsy patients. We utilize an inference technique based on the graph-cut algorithm to exactly solve graph inference in polynomial time. We validate the model by using clinically collected intracranial EEG data from 29 epilepsy patients to show that the model correctly identifies seizure-generating brain regions. Our results indicate that our model outperforms two conventional approaches used for seizure-onset localization (5-7% better AUC: 0.72, 0.67, 0.65) and that the proposed inference technique provides 3-10% gain in AUC (0.72, 0.62, 0.69) compared to sampling-based alternatives. | SUMMARY:
========
The authors propose a probabilistic model and MAP inference for localizing seizure onset zones (SOZ) using intracranial EEG data. The proposed model captures spatial correlations across EEG channels as well as temporal correlations within a channel. The authors claim that modeling these correlations leads to improved predictions when compared to detection methods that ignore temporal and spatial dependency.
PROS:
=====
This is a fairly solid applications paper, well-written, well-motivated, and an interesting application.
CONS:
=====
The proof of Prop. 1 is not totally clear, for example the energy in Eq. (4) includes a penalty for label disagreement across channels, which is absent in the the graph cut energy provided by the proof. The relationship between min-cut/max-flow and submodular pairwise energies is well established, and the authors should cite this literature (e.g. Kolmogorov and Zabih, PAMI 2004). Note that the higher-order temporal consistency term can be decomposed into pairwise terms for every pair of temporal states. It is unclear why this approach is not a valid one for showing optimality of min-cut and the authors should include an explanation.
What do the authors mean by "dynamic graph" (L:103)? Also, the edge set $E_n$ is indexed by epoch, suggesting graph structure is adjusted over time. It is not discussed anywhere how these edges are determined and whether they in fact change across epochs.
This estimate of SOZ probability is motivated in Eq. (5) as an MLE. It isn't clear that (5) is a likelihood as it is not a function of the data, only the latent states. The estimated probability of a location being an SOZ is given as an average over states across epochs, which is a valid estimator, and connections to MLE are unclear beyond that.
Generalizability claims of the approach (L:92-93) are weak. The extent to which this is a general model is simply that the model incorporates spatio-temporal dependencies. Specifying the factors encoding such dependencies in any factor graph will always require domain knowledge.
Some detailed comments:
* Are there really no free parameters in the model to scale energy weights? This seems surprising since the distance parameter in the spatial energy would need to be rescaled depending on the units used.
* Authors claim no other work incorporates spatio-temporal EEG constraints but a recent paper suggests otherwise: Martinez-Vargas et al., "Improved localization of seizure onset zones using spatiotemporal constraints and time-varying source connectivity.", Frontiers in Neuroscience (2017). Please cite relevant material.
* Lines in plots of Fig. 3 are too thin and do not appear on a printed version |
nips_2017_1942 | ExtremeWeather: A large-scale climate dataset for semi-supervised detection, localization, and understanding of extreme weather events
The detection and identification of extreme weather events in large-scale climate simulations is an important problem for risk management, informing governmental policy decisions and advancing our basic understanding of the climate system. Recent work has shown that fully supervised convolutional neural networks (CNNs) can yield acceptable accuracy for classifying well-known types of extreme weather events when large amounts of labeled data are available. However, many different types of spatially localized climate patterns are of interest including hurricanes, extra-tropical cyclones, weather fronts, and blocking events among others. Existing labeled data for these patterns can be incomplete in various ways, such as covering only certain years or geographic areas and having false negatives. This type of climate data therefore poses a number of interesting machine learning challenges. We present a multichannel spatiotemporal CNN architecture for semi-supervised bounding box prediction and exploratory data analysis. We demonstrate that our approach is able to leverage temporal information and unlabeled data to improve the localization of extreme weather events. Further, we explore the representations learned by our model in order to better understand this important data. We present a dataset, ExtremeWeather, to encourage machine learning research in this area and to help facilitate further work in understanding and mitigating the effects of climate change. The dataset is available at extremeweatherdataset.github.io and the code is available at https://github.com/eracah/hur-detect.
Deep neural networks, especially deep convolutional neural networks, have enjoyed breakthrough success in recent years, achieving state-of-the-art results on many benchmark datasets (Krizhevsky et al., 2012) and also compelling results on many practical tasks such as disease diagnosis (Hosseini-Asl et al., 2016), facial recognition (Parkhi et al., 2015), autonomous driving, and many others. Furthermore, deep neural networks have also been very effective in the context of unsupervised and semi-supervised learning; some recent examples include variational autoencoders (Kingma & Welling, 2013), adversarial networks (Goodfellow et al., 2014; Makhzani et al., 2015; Salimans et al., 2016; Springenberg, 2015), ladder networks (Rasmus et al., 2015) and stacked what-where autoencoders (Zhao et al., 2015).
There is a recent trend towards video datasets aimed at better understanding spatiotemporal relations and multimodal inputs (Kay et al., 2017;Gu et al., 2017;Goyal et al., 2017). The task of finding extreme weather events in climate data is similar to the task of detecting objects and activities in video -a popular application for deep learning techniques. An important difference is that in the case of climate data, the 'video' has 16 or more 'channels' of information (such as water vapour, pressure and temperature), while conventional video only has 3 (RGB). In addition, climate simulations do not share the same statistics as natural images. As a result, unlike many popular techniques for video, we hypothesize that we cannot build off successes from the computer vision community such as using pretrained weights from CNNs (Simonyan & Zisserman, 2014;Krizhevsky et al., 2012) pretrained on ImageNet (Russakovsky et al., 2015).
Climate data thus poses a number of interesting machine learning problems: multi-class classification with unbalanced classes; partial annotation; anomaly detection; distributional shift and bias correction; spatial, temporal, and spatiotemporal relationships at widely varying scales; relationships between variables that are not fully understood; issues of data and computational efficiency; opportunities for semi-supervised and generative models; and more. Here, we address multi-class detection and localization of four extreme weather phenomena: tropical cyclones, extra-tropical cyclones, tropical depressions, and atmospheric rivers. We implement a 3D (height, width, time) convolutional encoder-decoder, with a novel single-pass bounding-box regression loss applied at the bottleneck. To our knowledge, this is the first use of a deep autoencoding architecture for bounding-box regression. This architectural choice allows us to do semi-supervised learning in a very natural way (simply training the autoencoder with reconstruction for unlabelled data), while providing relatively interpretable features at the bottleneck. This is appealing for use in the climate community, as current engineered heuristics do not perform as well as human experts for identifying extreme weather events.
Our main contributions are (1) a baseline bounding-box loss formulation; (2) our architecture, a first step away from engineered heuristics for extreme weather events, towards semi-supervised learned features; (3) the ExtremeWeather dataset, which we make available in three benchmarking splits: one small, for model exploration, one medium, and one comprising the full 27 years of climate simulation output.
2 Related work
2.1 Deep learning for climate and weather data
Climate scientists do use basic machine learning techniques, for example PCA analysis for dimensionality reduction (Monahan et al., 2009), and k-means analysis for clustering (Steinhaeuser et al., 2011). However, the climate science community primarily relies on expert engineered systems and ad-hoc rules for characterizing climate and weather patterns. Of particular relevance is the TECA (Toolkit for Extreme Climate Analysis; Prabhat et al., 2012, 2015), an application of large scale pattern detection on climate data using heuristic methods. A more detailed explanation of how TECA works is described in section 3. Using the output of TECA analysis (centers of storms and bounding boxes around these centers) as ground truth, Liu et al. (2016) demonstrated for the first time that convolutional architectures could be successfully applied to predict the class label for two extreme weather event types. Their work considered the binary classification task on centered, cropped patches from 2D (single-timestep) multi-channel images. Like Liu et al. (2016) we use TECA's output (centers and bounding boxes) as ground truth, but we build on the work of Liu et al. (2016) by: 1) using uncropped images, 2) considering the temporal axis of the data, 3) doing multi-class bounding box detection, and 4) taking a semi-supervised approach with a hybrid predictive and reconstructive model. Some recent work has applied deep learning methods to weather forecasting. Xingjian et al. (2015) have explored a convolutional LSTM architecture (described in 2.2) for predicting future precipitation on a local scale (i.e. the size of a city) using radar echo data. In contrast, we focus on extreme event detection on planetary-scale data. Our aim is to capture patterns which are very local in time (e.g. a hurricane may be present in half a dozen sequential frames), compared to the scale of our underlying climate data, consisting of global simulations over many years. As such, 3D CNNs seemed to make more sense for our detection application, compared to LSTMs whose strength is in capturing long-term dependencies. | This paper presents a new dataset, a model and experimental results on this dataset to address the task of extreme weather event detection and localization. The dataset is a 27-year weather simulation sampled 8 times per day, with 16 channels at only the surface atmospheric level. The proposed model is based on 3D convolutional layers with an autoencoder architecture. The technique is semi-supervised, training with a loss that combines the reconstruction error of the autoencoder with detection and localization losses from the middle code layer.
In general the paper is very well written and quite clear on most details. The experimental results only use a small part of the data and are a bit preliminary, but still they do show the potential that this data has for future research.
Some comments/concerns/suggestions are the following.
- Since the paper presents a new dataset it is important to include a link to where the data will be available.
- In line 141 it says that only surface quantities are considered without any explanation why. Is it because the simulation would have been significantly more expensive so only surface computed? Is it a decision of the authors to make the dataset size more manageable? Are there any future plans to make available the full 30 atmospheric levels?
- The caption of table 1 says "Note test statistics are omitted to preserve anonymity of the test data." I assume this means that the ground truth labels are not being made available with the rest of the data. If this is the case, in the paper it should be explained what is the plan for other researchers to evaluate the techniques they develop. Will there be some web server where results are submitted and the results are returned?
- From what I understand, the ground truth labels for the four weather events considered were generated fully automatically. If so, why was only half of the data labeled? I agree that exploring semi-supervised methods is important, but this can be studied by ignoring the labels of part of the data and analyzing the behavior of the techniques when more or fewer labeled samples are available.
- Is there any plan to extend the data with weather events that cannot be labeled by TECA?
- VGG is mentioned in line 51 without explaining what it is. Maybe add a reference for readers that are not familiar with it.
- In figure 3 the graphs have very tiny numbers. These seem useless, so maybe remove them, saving a bit of space to make the images slightly larger.
- Line 172 final full stop missing.
- Be consistent on the use of t-SNE, some places you have it with a dash and other places without a dash. |
nips_2017_3088 | SVCCA: Singular Vector Canonical Correlation Analysis for Deep Learning Dynamics and Interpretability
We propose a new technique, Singular Vector Canonical Correlation Analysis (SVCCA), a tool for quickly comparing two representations in a way that is both invariant to affine transform (allowing comparison between different layers and networks) and fast to compute (allowing more comparisons to be calculated than with previous methods). We deploy this tool to measure the intrinsic dimensionality of layers, showing in some cases needless over-parameterization; to probe learning dynamics throughout training, finding that networks converge to final representations from the bottom up; to show where class-specific information in networks is formed; and to suggest new training regimes that simultaneously save computation and overfit less. | The paper presents an analysis of deep networks based on a combination of SVD and CCA to test similarity between representations at different layers. The authors use this technique to analyze aspects of the deep neural network training procedure, for example, how the representation builds bottom-up throughout training, and suggest an improved training procedure based on this analysis where lower-layers are frozen for most the training.
The paper is well-written and contributions are clearly stated.
The authors characterize the representation of a neuron by the vector of its activations for each data point, and then represent each layer as a matrix composed of the activation vectors for each neuron. Abstracting each layer as a kernel, and applying kernel SVD / CCA (with the linear kernel as a special case) would seem a more natural way to describe the same problem.
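To make the object of comparison concrete, a bare-bones version of the SVD-then-CCA computation could look like the sketch below; the variance threshold and the QR-based CCA shortcut are assumptions of this illustration rather than details taken from the paper.

```python
import numpy as np

def svcca_similarity(acts1, acts2, keep_var=0.99):
    """Compare two layers given activation matrices of shape (n_points, n_neurons).

    Step 1: SVD each layer and keep directions explaining `keep_var` of the variance.
    Step 2: CCA between the two reduced subspaces; return the mean canonical correlation.
    """
    def reduce(acts):
        acts = acts - acts.mean(axis=0)
        u, s, _ = np.linalg.svd(acts, full_matrices=False)
        k = np.searchsorted(np.cumsum(s**2) / np.sum(s**2), keep_var) + 1
        return u[:, :k] * s[:k]          # data projected on top-k singular directions

    x, y = reduce(acts1), reduce(acts2)
    # canonical correlations = singular values of Qx^T Qy for orthonormal bases Qx, Qy
    qx, _ = np.linalg.qr(x)
    qy, _ = np.linalg.qr(y)
    corrs = np.linalg.svd(qx.T @ qy, compute_uv=False)
    return corrs.mean()
```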
In section 2.1, it is unclear why the authors perform CCA on two networks instead of PCA on a single network, except for the sake of expressing the analysis as a CCA problem. It would be good to have PCA as a baseline.
While the bottom-up emergence of representation is an interesting effect, it is unclear whether the effect is general or is a consequence of the convnet structure of the model, in particular, the smaller number of parameters in the early layers.
In Figure 4, it seems that the representations have relatively low correlation until the very end, where they jump to 1. It suggests noise in the training procedure for the top layers. It would be interesting to test whether the frozen-layers training strategy leads to higher correlations also for the higher layers.
nips_2017_3447 | Affinity Clustering: Hierarchical Clustering at Scale
Graph clustering is a fundamental task in many data-mining and machine-learning pipelines. In particular, identifying a good hierarchical structure is at the same time a fundamental and challenging problem for several applications. The amount of data to analyze is increasing at an astonishing rate each day. Hence there is a need for new solutions to efficiently compute effective hierarchical clusterings on such huge data. The main focus of this paper is on minimum spanning tree (MST) based clusterings. In particular, we propose affinity, a novel hierarchical clustering based on Borůvka's MST algorithm. We prove certain theoretical guarantees for affinity (as well as some other classic algorithms) and show that in practice it is superior to several other state-of-the-art clustering algorithms. Furthermore, we present two MapReduce implementations for affinity. The first one works for the case where the input graph is dense and takes constant rounds. It is based on a Massively Parallel MST algorithm for dense graphs that improves upon the state-of-the-art algorithm of Lattanzi et al. [34]. Our second algorithm has no assumption on the density of the input graph and finds the affinity clustering in O(log n) rounds using Distributed Hash Tables (DHTs). We show experimentally that our algorithms are scalable for huge data sets, e.g., for graphs with trillions of edges. | The paper focuses on the development of the field of distributed hierarchical clustering. The authors propose a novel class of algorithms tagged 'affinity clustering' that operate on the basis of Boruvka's seminal work on minimal spanning trees and contrast those to linkage clustering algorithms (which are based on Kruskal's work).
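Since affinity clustering builds on Borůvka's procedure, a single (sequential, toy) Borůvka round, in which every current cluster selects its cheapest outgoing edge and merges along it, is sketched below for reference; the paper's contributions lie in the MapReduce implementations and the guarantees, not in this basic step.

```python
def boruvka_round(edges, comp):
    """One Borůvka round: every component selects its minimum-weight outgoing edge.

    `edges` is a list of (weight, u, v); `comp` maps each node to its component id.
    Returns updated component labels; repeated rounds yield the MST / the hierarchy.
    """
    best = {}
    for w, u, v in edges:
        cu, cv = comp[u], comp[v]
        if cu == cv:
            continue
        for c, other in ((cu, cv), (cv, cu)):
            if c not in best or w < best[c][0]:
                best[c] = (w, c, other)
    # merge along all selected edges (naive union-find over component ids)
    parent = {c: c for c in set(comp.values())}
    def find(c):
        while parent[c] != c:
            parent[c] = parent[parent[c]]
            c = parent[c]
        return c
    for w, c, other in best.values():
        rc, ro = find(c), find(other)
        if rc != ro:
            parent[rc] = ro
    return {node: find(c) for node, c in comp.items()}
```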
The authors systematically introduce the theoretical underpinnings of affinity clustering, before proposing 'certificates' as a metric to characterise clustering algorithm solutions more generally by assessing the clustered edge weights (cost).
Following the theoretical analysis and operationalisation of MapReduce variants of affinity clustering for distributed operation, the quality is assessed empirically using standard datasets with variants of linkage- and affinity-based algorithms, as well as k-means. In addition to the Rand index (as metric for clustering accuracy) the quality of algorithms is assessed based on the ratio of the detected clusters (with balanced cluster sizes considered favourable).
Affinity-based algorithms emerge favourably for nearly all datasets, with in parts significant improvements (with the flat clustering algorithm k-means as closest contestant).
Finally, the scalability of affinity clustering is assessed using private and public corpora, with near-linear scalability for the best performing case.
Overall, the paper proposes a wide range of concepts that extend the field of hierarchical clustering (affinity-based clustering, certificates as QA metric). As far as I could retrace, the contributions are systematically developed and analysed, which warrants visibility.
One (minor) comment includes the experimental evaluation. In cases where the affinity-based algorithms did not perform as well (an example is the cluster size ratio for the Digits dataset), it would have been great to elaborate (or even hypothesise) why this is the case. This would potentially give the reader a better understanding of the applicability of the algorithm and potential limitations.
Minor typos/omissions:
- Page 8, Table 1: "Numbered for ImageGraph are approximate."
- Page 8, Subsection Scalability: Missing reference for public graphs. |
nips_2017_2878 | Deep Learning for Precipitation Nowcasting: A Benchmark and A New Model
With the goal of making high-resolution forecasts of regional rainfall, precipitation nowcasting has become an important and fundamental technology underlying various public services ranging from rainstorm warnings to flight safety. Recently, the Convolutional LSTM (ConvLSTM) model has been shown to outperform traditional optical flow based methods for precipitation nowcasting, suggesting that deep learning models have a huge potential for solving the problem. However, the convolutional recurrence structure in ConvLSTM-based models is location-invariant while natural motion and transformation (e.g., rotation) are location-variant in general. Furthermore, since deep-learning-based precipitation nowcasting is a newly emerging area, clear evaluation protocols have not yet been established. To address these problems, we propose both a new model and a benchmark for precipitation nowcasting. Specifically, we go beyond ConvLSTM and propose the Trajectory GRU (TrajGRU) model that can actively learn the location-variant structure for recurrent connections. Besides, we provide a benchmark that includes a real-world large-scale dataset from the Hong Kong Observatory, a new training loss, and a comprehensive evaluation protocol to facilitate future research and gauge the state of the art. | Summary
This paper describes a new GRU-based architecture for precipitation nowcasting, a task which can be seen as a video prediction problem with a fixed camera position. The authors also describe a new benchmarking package for precipitation nowcasting and evaluate their model on both this benchmark and an altered version of moving MNIST.
Technical quality
I think this is a good application paper while the proposed architecture is interesting as well. For the sake of completeness I would have liked to see some comparisons with LSTM versions of the networks as well and with fully connected RNNs, but the authors already did look at a nice selection of baselines.
Clarity
The paper is generally well-written and coherent in structure. While I found it easy to follow the definition of the ConvGRU, I found the description of the proposed TrajGRU much harder to understand. The most important change in the new model is the replacement of the state-to-state convolutions/multiplications with a transformation that seems to be a specific version of the type of module described by Jaderberg et al. (2015) (reference 10 in the paper). It is not clear to me how the structure generating network gamma produces the flow fields U and V exactly though. It would help if this was made more explicit and if there would be a more clear emphasis on the differences with the transformation module from Jaderberg et al. (2015). At least the goal of the trajectory selection mechanism itself is clear enough for the rest of the narrative to make sense.
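For context, the ConvGRU baseline that TrajGRU modifies can be sketched as a minimal PyTorch-style cell as below; the kernel size and gate arrangement are assumptions, and TrajGRU would additionally warp the hidden state along the learned flow fields U and V before the state-to-state transformation.

```python
import torch
import torch.nn as nn

class ConvGRUCell(nn.Module):
    """Minimal convolutional GRU cell: state-to-state transitions are convolutions."""
    def __init__(self, in_channels, hidden_channels, kernel_size=3):
        super().__init__()
        pad = kernel_size // 2
        self.input_conv = nn.Conv2d(in_channels, 3 * hidden_channels, kernel_size, padding=pad)
        self.hidden_conv = nn.Conv2d(hidden_channels, 3 * hidden_channels, kernel_size, padding=pad)

    def forward(self, x, h):
        xi = self.input_conv(x)
        hi = self.hidden_conv(h)              # TrajGRU would instead gather h along learned flows here
        xz, xr, xn = xi.chunk(3, dim=1)
        hz, hr, hn = hi.chunk(3, dim=1)
        z = torch.sigmoid(xz + hz)            # update gate
        r = torch.sigmoid(xr + hr)            # reset gate
        n = torch.tanh(xn + r * hn)           # candidate state
        return (1 - z) * n + z * h            # new hidden state
```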
Novelty
While the general architecture is the same as the ConvLSTM and the paper is somewhat application oriented, the idea of learning trajectories with spatial transformer inspired modules is quite original to me.
Significance
The new data set seems to be a useful benchmark, but I know too little about precipitation nowcasting to judge this. I also don't know enough about the state-of-the-art in that field to judge the performance gains very well, and since the benchmark is a new one I suppose time will need to tell. The work would have been stronger if there were also a comparison on another existing precipitation nowcasting benchmark. I think that the idea to learn sparsely connected temporal trajectories may have more widespread applications in other domains where the distributions to be modelled are of high dimensionality but display certain independence properties and redundancy.
pros:
Interesting model
Nice baselines and practical evaluation
The paper is generally well-written.
cons:
The description and motivation of the model is a bit short and hard to follow.
The novelty is somewhat limited compared to ConvGRU/LSTM
EDIT: The rebuttal addressed some of the concerns I had about the paper and I think that the additional experiments also add to the overall quality so I increased my score. |
nips_2017_3374 | Learning Hierarchical Information Flow with Recurrent Neural Modules
We propose ThalNet, a deep learning model inspired by neocortical communication via the thalamus. Our model consists of recurrent neural modules that send features through a routing center, endowing the modules with the flexibility to share features over multiple time steps. We show that our model learns to route information hierarchically, processing input data by a chain of modules. We observe common architectures, such as feed forward neural networks and skip connections, emerging as special cases of our architecture, while novel connectivity patterns are learned for the text8 compression task. Our model outperforms standard recurrent neural networks on several sequential benchmarks. | The paper introduces a modular architecture that has a central routing module and other in/out modules, akin to the role of thalamus/neocortex regions in the brain. Initial experiments show promise.
I like the idea of the paper, which was nicely written. Some module details are missing (the read function r, the merging function m and the module function f), possibly because of the space limitation, but it would be very useful to have them in a supplement.
The main cons are (i) the experiments are not super-strong; (ii) the search for the best module/routing functions is quite preliminary. However I believe that the direction is fruitful and worth exploring further.
In related work, there is a contemporary architecture with similar intention (although the specific targets were different, i.e., multi-X learning, where X = instance, view, task, label): "On Size Fit Many: Column Bundle for Multi-X Learning", Trang Pham, Truyen Tran, Svetha Venkatesh. arXiv preprint arXiv: 1702.07021. |
nips_2017_450 | Avoiding Discrimination through Causal Reasoning
Recent work on fairness in machine learning has focused on various statistical discrimination criteria and how they trade off. Most of these criteria are observational: They depend only on the joint distribution of predictor, protected attribute, features, and outcome. While convenient to work with, observational criteria have severe inherent limitations that prevent them from resolving matters of fairness conclusively. Going beyond observational criteria, we frame the problem of discrimination based on protected attributes in the language of causal reasoning. This viewpoint shifts attention from "What is the right fairness criterion?" to "What do we want to assume about our model of the causal data generating process?" Through the lens of causality, we make several contributions. First, we crisply articulate why and when observational criteria fail, thus formalizing what was before a matter of opinion. Second, our approach exposes previously ignored subtleties and why they are fundamental to the problem. Finally, we put forward natural causal non-discrimination criteria and develop algorithms that satisfy them. | The paper proposes a causal view on the question of fair machine learning. Its main contribution is the introduction of causal language to the problem, and specifically the notion of "resolving variable": a variable that mediates the causal effect of a protected attribute in a way the user deems "fair". An example would be the variable "choice of departments" in the famous Simpson's paradox college-admission scenario.
Overall I think this paper has a lot of potential, but it still needs some work on clarifying many of the concepts introduced. Fairness is a difficult subject as it clearly goes beyond the math and into societal and normative discussions. As such, I am looking forward to the authors' replies to my comments and questions below, hoping to have a fruitful discussion.
Pros:
1. This paper present an important contribution to the discussion on fairness, by introducing the notion of resolving variable. I have already found this term useful in discussions about the subject.
2. The paper shows how in relatively simple scenarios the proposed definitions and methods reduce to very reasonable solutions, for example the result in Theorem 2 and its corollaries.
Cons:
1. No experimental results, not even synthetic. Other papers in the field have experiments, e.g. Hardt Price & Srebro (2016) and Chulechova (2016).
2. Lack of clarity in distinction between proxies and non-resolving variables.
3. I am not convinced of the authors view on the role of proxy variables.
4. I am not convinced that some of the proposed definitions capture the notion of fairness the authors claim to capture.
Specific comments:
1. What exactly is the difference between a proxy variable and a non-resolving variable? Does the difference lie only in the way they are used?
2. Why is the definition of proxy discrimination (def. 3) the one we want?
Wouldn't the individual proxy discrimination be the one which more closely correlates with our ideas of fairness?
E.g., say name (variable P) affects past employment (variable X), which in turn affects the predictor. Couldn't averaging this effect over X's (even considering the do-operator version) lead to a predictor which is discriminating with respect to a specific X?
3. Why is the notion of proxy variables the correct one to control for? I think on the contrary, in many cases we *do* have access to the protected attribute (self-declared race, gender, religion etc.). It's true that conceiving an intervention on these variables is difficult, but conceiving the paths through which these variables affect outcomes is more plausible. The reason gender affects employment goes beyond names: it relates to opportunities presented, different societal attitudes, and so forth. The name is but one path (an unresolved path) through which gender can affect employment.
4. The wording in the abstract: "what do we want to assume about the causal DGP", seems odd. The causal DGP exists in the world, and we cannot in general change it. Do the authors mean the causal DGP *that gives rise to a predictor R* ?
5. Why have the mentioned approaches to individual fairness not gained traction?
6. The paper would be better if it mentioned similar work which recently appeared on arxiv by Nabi & Shpitser, "Fair Inference on Outcomes". In general I missed a discussion of mediation, which is a well known term in causal inference and seems to be what this paper is aiming at understanding.
7. I am confused by the relation between the predictions R and the true outcomes Y. Shouldn't the parameters leading to R be learned from data which includes Y?
8. In figure 4, why does the caption say that "we generically do not want A to influence R"? Isn't the entire discussion premised on the notion that sometimes it's ok if A influences R, through a resolving variable?
Minor comments:
1. line 215 I think there's a typo, N_A should be N_P? |
nips_2017_2572 | Q-LDA: Uncovering Latent Patterns in Text-based Sequential Decision Processes
In sequential decision making, it is often important and useful for end users to understand the underlying patterns or causes that lead to the corresponding decisions. However, typical deep reinforcement learning algorithms seldom provide such information due to their black-box nature. In this paper, we present a probabilistic model, Q-LDA, to uncover latent patterns in text-based sequential decision processes. The model can be understood as a variant of latent topic models that are tailored to maximize total rewards; we further draw an interesting connection between an approximate maximum-likelihood estimation of Q-LDA and the celebrated Q-learning algorithm. We demonstrate in the text-game domain that our proposed method not only provides a viable mechanism to uncover latent patterns in decision processes, but also obtains state-of-the-art rewards in these games. | This paper targets two text games and proposes a new reinforcement learning framework, Q-LDA, to discover latent patterns in sequential decision processes. The proposed model uses LDA to convert the action space into a continuous representation and subsequently uses a Q-learning algorithm to iteratively make decisions in a sequential manner.
The authors apply the proposed model to two different text games and achieve better performance than previously proposed baseline models.
The paper is a little hard to follow, with some missing or inconsistent information. The paper is not self-contained: a reader who is not familiar with the problem domain may need to refer to the Appendix or prior work almost all the time.
Some detailed comments:
- I would suggest that the authors include a detailed section highlighting the contributions of the paper.
- The authors provide some screenshots of the text-game interface in the appendix material, but the information about the text games is still limited. The reference [11] doesn't provide much useful context either. I would recommend that the authors include some example text flows (at least in the Appendix) from these games to better illustrate the target scenario. What are the possible conversation flows of each text game, and how many are there?
- In the context of the game, the agent only receives a reward at the end of the game. This is consistent with the text in lines 74-75. However, in the graphical model shown in Figure 1, it seems like there is a reward after each turn. I assume this graphical illustration is for the general process, but it would be nice to include an explanation in the text.
- In the graphical illustration, it is unclear to me which variables are observable and which are not. For example, all \beta_A and \beta_S are not observable. The rewards r_t are not observable, in my understanding.
- In the generative process, it appears to me that the observation text W is generated following an LDA process as well, but in the graphical model illustration, the corresponding plate is missing from the figure.
- I would encourage the authors to include a section on model complexity analysis, for example, the size of the parameter space. Given the size of the dataset and the complexity of the proposed model, it is hard to judge whether the learned policy is generalizable.
- In the experiments, what is the vocabulary size of the dataset, in terms of observed text and action text? In Table 2 and Figure 2, the authors demonstrate some interesting outcomes and observations from the generated topics. I am wondering whether these topics are used in other episodes, since they look very fine-grained and hence may not be applicable to other scenarios.
- I would also recommend displaying a list of all topics (maybe in the Appendix) rather than only the cherry-picked ones.
- In the experiments, the authors mention that the upper bound for the reward is 20 for "Saving John" and 30 for "Machine of Death". Are these reward numbers objectively specified in the game flow or assigned in a post-hoc manner? If the latter, how do you justify this measure (especially if the rewards are assigned upon game termination)?
- Missing reference in line 76. Missing space in line 106. {D}irichlet in line 291.
%%%%%%%%%%%%%%%%%%%%%%%%
The authors' response has clarified some of my questions, and they also agree to improve the paper to make it more self-contained. I have adjusted the score accordingly.
nips_2017_2655 | Learning Graph Representations with Embedding Propagation
We propose Embedding Propagation (EP), an unsupervised learning framework for graph-structured data. EP learns vector representations of graphs by passing two types of messages between neighboring nodes. Forward messages consist of label representations such as representations of words and other attributes associated with the nodes. Backward messages consist of gradients that result from aggregating the label representations and applying a reconstruction loss. Node representations are finally computed from the representation of their labels. With significantly fewer parameters and hyperparameters an instance of EP is competitive with and often outperforms state of the art unsupervised and semi-supervised learning methods on a range of benchmark data sets.
Despite its conceptual simplicity, we show that EP generalizes several existing machine learning methods for graph-structured data. Since EP learns embeddings by incorporating different label types (representing, for instance, text and images) it is a framework for learning with multi-modal data [31]. | The authors introduce embedding propagation (EP), a new message-passing method for learning representations of attributed vertices in graphs. EP computes vector representations of nodes from the 'labels' (sparse features) associated with nodes and their neighborhood. The learning of these representations is facilitated by two different types of messages sent along edges: a 'forward' message that sends the current representation of the node, and a 'backward' message that passes back the gradients of some differentiable reconstruction loss. The authors report results that are competitive with or outperform baseline representation learning methods such as deepwalk and node2vec.
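To make the message-passing scheme described above concrete, here is a minimal sketch of how I read the reconstruction idea; the toy graph, the margin loss, the mean aggregation, and all names are my own simplifications and assumptions, not necessarily the paper's exact EP formulation:

```python
import torch

# Toy graph with one categorical label per node; label embeddings are the learnable objects.
neighbors = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}
labels = torch.tensor([0, 1, 1, 2])
emb = torch.nn.Embedding(3, 8)                                 # learnable label representations
opt = torch.optim.Adam(emb.parameters(), lr=1e-2)
margin = 1.0

for step in range(200):
    loss = torch.zeros(())
    for v, nbrs in neighbors.items():
        h_v = emb(labels[v])                                   # node's own label representation
        h_rec = emb(labels[torch.tensor(nbrs)]).mean(dim=0)    # aggregated "forward" messages
        u = torch.randint(0, len(labels), (1,)).item()         # random corruption for the ranking loss
        h_neg = emb(labels[u])
        # Margin-based reconstruction loss; autograd supplies the "backward" gradient messages.
        loss = loss + torch.relu(margin + (h_rec - h_v).norm() - (h_rec - h_neg).norm())
    opt.zero_grad(); loss.backward(); opt.step()

# Node representations computed from the (now trained) label embeddings of the neighborhood.
node_repr = {v: emb(labels[torch.tensor(nbrs)]).mean(dim=0).detach() for v, nbrs in neighbors.items()}
print(node_repr[0].shape)
```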
Quality:
The quality of the paper is high. The experimental technique is clearly described, appears sound, and the authors report results that are competitive with strong baseline methods. In addition, the authors provide clear theoretical analysis of the worst-case complexity of their method.
Clarity:
The paper is generally clearly written and well-organized. The proposed model is clearly delineated.
Originality:
The work is moderately original - the authors draw inspiration from message-passing inference algorithms to improve on existing graph embedding methods.
Significance:
On the positive side, the proposed technique is creative and likely to be of interest to the NIPS community. Furthermore, the authors report results that are competitive or better when compared with strong baseline methods, and the computational complexity of the technique is low (linear in both the maximum degree of the graph and the number of nodes). However, while the results are competitive, it is unclear which method (if any) is offering the best performance. This is a little troubling because EP is constructing the representations using both node features and graph structure while the baselines only consider graph structure itself, and this raises the question of whether EP is adequately capturing the node features when learning the node representation.
Overall, I believe that the paper is significant enough to warrant publication at NIPS.
Some questions and suggestions for improvement for the authors:
- It seems like this method could be implemented in a distributed asynchronous manner like standard belief propagation. If so, the scalability properties are nice - linear computational complexity and a low memory footprint due to the sparse feature input and low-dimensional dense representations. This could be a good area to explore.
- It seems like there are really two representations being learned: a representation for a node's labels, and an overall representation for the node that takes the label representation of the node and the representations of the neighboring nodes into account. How necessary is it to learn a representation for the node labels as a part of the pipeline? Would it be possible to just use the raw sparse features (node labels)? If so, how would that affect performance?
- Is it possible to learn this representation in a semi-supervised manner?
- Learning curves would be nice. If the method is using fewer parameters, perhaps it's less inclined to overfit in a data-poor training regime?
Overall impression:
A good paper that proposes an interesting technique that gives solid (although not spectacular) results. An interesting idea for others to build on. Accept. |
nips_2017_3235 | Gradient Episodic Memory for Continual Learning
One major obstacle towards AI is the poor ability of models to solve new problems quicker, and without forgetting previously acquired knowledge. To better understand this issue, we study the problem of continual learning, where the model observes, once and one by one, examples concerning a sequence of tasks. First, we propose a set of metrics to evaluate models learning over a continuum of data. These metrics characterize models not only by their test accuracy, but also in terms of their ability to transfer knowledge across tasks. Second, we propose a model for continual learning, called Gradient Episodic Memory (GEM) that alleviates forgetting, while allowing beneficial transfer of knowledge to previous tasks. Our experiments on variants of the MNIST and CIFAR-100 datasets demonstrate the strong performance of GEM when compared to the state-of-the-art. | The authors of the manuscript consider the continuum learning setting, where the learner observes a stream of data points from training, which are ordered according to the tasks they belong to, i.e. the learner encounters any data from the next task only after it has observed all the training data for the current one. The authors propose a set of three metrics for evaluation performance of learning algorithms in this setting, which reflect their ability to transfer information to new tasks and not forget information about the earlier tasks. Could the authors, please, comment on the difference between continuum and lifelong learning (the corresponding sentence in line 254 seems incomplete)?
The authors also propose a learning method, termed Gradient Episodic Memory (GEM). The idea of the method is to keep a set of examples from every observed task and make sure that at each update stage the loss on the observed tasks does not increase. It seems like there are some typos in eq. (6), because in its current form I don't see how it captures this idea.
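As a reading aid, here is a rough sketch of the constraint-and-project step as I understand it; the sequential analytic projection is my own simplification for illustration, whereas the paper handles the general multi-constraint case as a small QP:

```python
import numpy as np

def gem_project(g, mem_grads):
    """Keep the proposed update from increasing the loss on past tasks (rough sketch).

    g:         gradient on the current task, shape (d,)
    mem_grads: gradients computed on each past task's episodic memory.
    The constraint is <g_tilde, g_k> >= 0 for every past task k.
    """
    g_tilde = g.copy()
    for g_k in mem_grads:
        dot = g_tilde @ g_k
        if dot < 0:  # the proposed step would increase the loss on that task
            g_tilde = g_tilde - (dot / (g_k @ g_k)) * g_k
    return g_tilde

rng = np.random.default_rng(0)
g, g_old = rng.standard_normal(10), rng.standard_normal(10)
g_new = gem_project(g, [g_old])
print(g @ g_old, g_new @ g_old)  # the second dot product is non-negative
```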
GEM is evaluated on a set of real-world datasets against a set of state-of-the-art baselines. I have a few questions regarding this evaluation:
1. Comparison to iCaRL: if I understand correctly, iCaRL solves a multi-class classification problem with new classes being added over time, i.e. by the end of training on CIFAR-100 it would be able to solve the full 100-class problem, while GEM would solve only any of the 20 5-class problems. How was the evaluation of iCaRL performed in this experiment? Was it ever given a hint about which 5 classes to look at?
2. It also seems from Table 1 that the results in Figure 1 for GEM and iCaRL were obtained for different memory sizes. If this is true, could the authors comment on why?
3. On the MNIST rotations task, the multi-task method has less negative transfer than GEM, but its accuracy is lower. Could the authors please comment on why? Or are these differences non-significant?
4. It would be very interesting to see the performance of a baseline (multi-task for MNIST and/or multi-class for CIFAR) that gets all the data in a batch, to evaluate the gap that remains between continuum learning methods, like GEM, and standard batch learning.
5. Were the architectures used for the baseline methods (multi-task and iCaRL) the same as for GEM?
nips_2017_3506 | Efficient and Flexible Inference for Stochastic Systems
Many real world dynamical systems are described by stochastic differential equations. Thus parameter inference is a challenging and important problem in many disciplines. We provide a grid-free and flexible algorithm offering parameter and state inference for stochastic systems and compare our approach based on variational approximations to state-of-the-art methods, showing significant advantages both in runtime and accuracy. | ***Update following reviewer discussion and author feedback***
I am happy to revise my score for the paper provided the authors add some discussion of the number of OU processes used in the simulations, the \delta term in the RODE system and add the volatility to the Lorenz96 model (in addition to the other changes to the text recommended by myself and the other reviewers).
**************************************************************
The authors propose a method for combined state and parameter estimation for stochastic differential equation (SDE) models. The SDE is first transformed into a random ordinary differential equation. Several solution paths are then simulated to generate a large number of ordinary differential equations, and each of these is then solved using an EM algorithm type approach that was introduced in an earlier paper. The method is tested on two systems, the Lorenz96 and Lorenz63 models, and compared to a competitor method showing that the new approach can be faster and more accurate.
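For readers unfamiliar with the SDE-to-RODE step summarized above, here is a toy sketch of the standard additive-noise transformation via a stationary OU path; the one-dimensional drift, step sizes, and names are my own choices and do not reproduce the paper's multivariate setup:

```python
import numpy as np

rng = np.random.default_rng(0)
dt, T = 1e-3, 5.0
n = int(T / dt)

def f(x):                      # drift of the SDE dX = f(X) dt + dW (unit volatility)
    return x - x**3

# 1) Simulate one path of the stationary OU process dO = -O dt + dW.
O = np.zeros(n)
for k in range(n - 1):
    O[k + 1] = O[k] - O[k] * dt + np.sqrt(dt) * rng.standard_normal()

# 2) For that fixed path, solve the pathwise RODE dz/dt = f(z + O_t) + O_t with a
#    deterministic solver; X_t = z(t) + O_t then recovers the SDE solution.
z = np.zeros(n)
for k in range(n - 1):
    z[k + 1] = z[k] + dt * (f(z[k] + O[k]) + O[k])   # explicit Euler for brevity
X = z + O

# Repeating steps 1)-2) for many OU paths yields the ensemble of ODEs the review mentions.
print(X[-1])
```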
There are some interesting ideas in the paper but I can’t accept it for publication in its current form. The general approach seems reasonable, but there are some details of it that the authors don’t really mention that I think need to be explored. There are also lots of things that I found unclear in the manuscript and I think these need to be fixed. Since there is no major revision option for the conference I must therefore recommend rejection. I think if the authors revise the paper accordingly it could be accepted at a similar conference in the future.
Detailed comments:
I think in Section 3 you need to spend more time discussing two things:
How many OU processes you should be simulating. You don’t seem to discuss it but it must be crucial to the performance of the approach in practice.
Why the additional \delta term is added to each RODE system, and how its variance should be chosen.
The main reason for these suggestions is that the approach itself borrows innovations from other sources (e.g. Gorbich et al. 2017). Because of this, the specifics of how these tools are applied in the SDE context are, to my mind, the main contribution of the present paper. If you don't discuss these things then it isn't clear how useful the paper is.
The first line states 'A dynamical system is represented by a set of K stochastic differential equations…' This is not true in general; many dynamical systems cannot be represented by stochastic differential equation models (e.g. when the driving noise is not Brownian).
line 56 why is equation (2) a scalar RODE? I would have thought that it was a d-dimensional system of RODEs. Or does scalar not mean what I think it means here?
In (2) the vector field is described as f(x(t),w(t)), whereas in (3) it is written f(x(t),\eta(w)). Shouldn't it instead be f(x(t),\eta(t)) in (2) and f(x(t),\eta(t,w)) in (3), since \eta(t) is a r.v. and \eta(t,w) will be the fixed outcome for w \in \Omega?
Line 60. Here you introduce Example 1, but then fail to discuss it for the next paragraph. Then you introduce Example 2 which is essentially the second part of example 1. I think it would be better if you combined example 1 and 2, since the purpose of the example is to show how a RODE can be re-written as an SDE when the driving noise is Brownian. And also put it where example 2 currently is (i.e. below the paragraph of text).
Line 87. ‘Stationary' for the OU process can either mean it is started at equilibrium or that it is positive recurrent (i.e. it will be ergodic). Which of these do you mean here? From the definition I think it should be the latter since the SDE could be conditioned to start at any point, but this could certainly cause some confusion I think.
Line 89. Why an OU process? Can the authors offer any intuition here, as it seems to come out of nowhere and I think the reader would be very interested to understand the origins.
Line 167. Is the Lorenz96 system therefore an SDE with unit volatility? I think it would be good to clarify this for non-experts.
Typos
line 24 diffusions processes -> diffusion processes
line 44 the fact the we -> the fact that we
line 48 ‘both frameworks are highly related’ doesn’t make sense. If two things are related then the statement clearly applies to both of them.
line 54 is a R^m valued -> be an R^m valued
line 99 an computationally efficient -> a computationally efficient
line 113 an product of experts -> a product of experts
line 119 an lower bound -> a lower bound
line 174 in pur approach -> in our approach (I think?)
line 190 robuts -> robust
line 198 random ordinary differential equation -> random ordinary differential equations |
nips_2017_3507 | When Cyclic Coordinate Descent Outperforms Randomized Coordinate Descent
The coordinate descent (CD) method is a classical optimization algorithm that has seen a revival of interest because of its competitive performance in machine learning applications. A number of recent papers provided convergence rate estimates for their deterministic (cyclic) and randomized variants that differ in the selection of update coordinates. These estimates suggest randomized coordinate descent (RCD) performs better than cyclic coordinate descent (CCD), although numerical experiments do not provide clear justification for this comparison. In this paper, we provide examples and more generally problem classes for which CCD (or CD with any deterministic order) is faster than RCD in terms of asymptotic worst-case convergence. Furthermore, we provide lower and upper bounds on the amount of improvement on the rate of CCD relative to RCD, which depends on the deterministic order used. We also provide a characterization of the best deterministic order (that leads to the maximum improvement in convergence rate) in terms of the combinatorial properties of the Hessian matrix of the objective function. | In this paper, the authors analyze cyclic and randomized coordinate descent and show that despite the common assumption in the literature, cyclic CD can actually be much faster than randomized CD. The authors show that this is true for quadratic objectives when A is of a certain type (e.g., symmetric positive definite with diagonal entries of 1, irreducible M-matrix, A is consistently ordered, 2-cyclic matrix).
Comments:
- There are more than 2 variants of CD selection rules: greedy selection is a very valid selection strategy for structured ML problems (see "Coordinate Descent Converges Faster with the Gauss-Southwell Rule than Random Selection", ICML 2015, by Nutini et al.).
- The authors state that it is a common perception that randomized CD always dominates cyclic CD. I don't agree with this statement. For example, if your matrix is diagonal, cyclic CD will clearly do better than randomized CD (a toy illustration is sketched after these comments).
- The authors make several assumptions on the matrices considered in their analysis (e.g., M-matrix, 2-cyclic). It is not obvious to me that these matrices are common in machine learning applications when solving a quadratic problem. I think the authors need to do a better job at convincing the reader that these matrices are important in ML applications, e.g., precision matrix estimation https://arxiv.org/pdf/1404.6640.pdf "Estimation of positive definite M-matrices and structure learning for attractive Gaussian Markov Random fields" by Slawski and Hein, 2014.
- The authors should be citing "Improved Iteration Complexity Bounds of Cyclic Block Coordinate Descent for Convex Problems" NIPS 2015 (Sun & Hong)
- I have several issues with the numerical results presented in this paper. The size of the problem n = 100 is small. As coordinate descent methods are primarily used in large-scale optimization, I am very curious why the authors selected such a small system to test. Also, it seems that because of the structure of the matrices considered, there is equal progress to be made regardless of the coordinate selected -- thus, it seems obvious that cyclic would work better than random, as random would suffer from re-selection of coordinates, while cyclic ensures updating every coordinate in each epoch. In other words, the randomness of random selection would be unhelpful (especially using a uniform distribution). Can the authors please justify these decisions.
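The toy illustration promised in the comment above about (nearly) diagonal matrices; the test matrix, sizes, and epoch count are my own choices and are not the matrix classes analyzed in the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50
A = np.eye(n) + 0.01 * np.abs(rng.standard_normal((n, n)))
A = 0.5 * (A + A.T)                      # symmetric and strongly diagonally dominant
b = rng.standard_normal(n)
x_star = np.linalg.solve(A, b)

def coord_descent(order_fn, n_epochs=20):
    x = np.zeros(n)
    for _ in range(n_epochs):
        for i in order_fn():
            # exact minimization of 0.5 x'Ax - b'x over coordinate i
            x[i] = (b[i] - A[i] @ x + A[i, i] * x[i]) / A[i, i]
    return np.linalg.norm(x - x_star)

print("cyclic error :", coord_descent(lambda: range(n)))
print("random error :", coord_descent(lambda: rng.integers(0, n, size=n)))
```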
Overall, I'm not convinced that the analysis in this paper is that informative. It seems that the assumptions on the matrix naturally lend themselves to cyclic selection, as there is equal progress to be made by updating any coordinate. At least that is what the numerical results are showing. I think the authors need to further convince readers that this problem setting is important and realistic for ML applications and that their numerical results emphasize again the *realistic* benefits of cyclic selection.
============
POST REBUTTAL
============
I have read the author rebuttal and I thank the authors for their detailed comments. My comment regarding "equal progress to be made" was with respect to the structure of the matrix A, and the authors have addressed this concern, pointing my attention to the reference in [20]. I think with the inclusion of the additional references mentioned by the authors in the rebuttal that support the applicability of the considered types of matrices in ML applications, I can confidently recommend this paper for acceptance.
nips_2017_2300 | Variational Walkback: Learning a Transition Operator as a Stochastic Recurrent Net
We propose a novel method to directly learn a stochastic transition operator whose repeated application provides generated samples. Traditional undirected graphical models approach this problem indirectly by learning a Markov chain model whose stationary distribution obeys detailed balance with respect to a parameterized energy function. The energy function is then modified so the model and data distributions match, with no guarantee on the number of steps required for the Markov chain to converge. Moreover, the detailed balance condition is highly restrictive: energy based models corresponding to neural networks must have symmetric weights, unlike biological neural circuits. In contrast, we develop a method for directly learning arbitrarily parameterized transition operators capable of expressing nonequilibrium stationary distributions that violate detailed balance, thereby enabling us to learn more biologically plausible asymmetric neural networks and more general non-energy based dynamical systems. The proposed training objective, which we derive via principled variational methods, encourages the transition operator to "walk back" (prefer to revert its steps) in multi-step trajectories that start at datapoints, as quickly as possible back to the original data points. We present a series of experimental results illustrating the soundness of the proposed approach, Variational Walkback (VW), on the MNIST, CIFAR-10, SVHN and CelebA datasets, demonstrating superior samples compared to earlier attempts to learn a transition operator. We also show that although each rapid training trajectory is limited to a finite but variable number of steps, our transition operator continues to generate good samples well past the length of such trajectories, thereby demonstrating the match of its non-equilibrium stationary distribution to the data distribution. Source | This paper proposes an extension of the information destruction/reconstruction processes introduced by Sohl-Dickstein et al. (i.e. NET). Specifically, the authors propose learning an explicit model for the information destroying process, as opposed to defining it a priori as repeated noising and rescaling. The authors also propose tying the parameters of the forwards/backwards processes, with the motivation of more efficiently seeking to eliminate spurious modes (kind of like contrastive divergence in undirected, energy-based models).
The general ideas explored in this paper are interesting -- i.e. training models which generate data through stochastic iterative refinement, and which provide mechanisms for "wandering around" the data manifold. However, the proposed model is a fairly straightforward extension of NET and does not produce compelling quantitative or qualitative results. The general perspective taken in this paper, i.e. reinterpreting the NET approach in the context of variational inference, was presented earlier in Section 2.3 of "Data Generation as Sequential Decision Making" by Bachman et al. (NIPS 2015). If more ambitious extensions of the ideas latent in NET were explored in this paper, I would really like it. But, I don't see that yet. And, e.g., the CIFAR10 results are roughly what one gets when simply modelling the data using a full-rank Gaussian distribution. This would make an interesting workshop paper, but I don't think there's enough strong content yet to fill a NIPS paper. |
nips_2017_1803 | Net-Trim: Convex Pruning of Deep Neural Networks with Performance Guarantee
We introduce and analyze a new technique for model reduction for deep neural networks. While large networks are theoretically capable of learning arbitrarily complex models, overfitting and model redundancy negatively affects the prediction accuracy and model variance. Our Net-Trim algorithm prunes (sparsifies) a trained network layer-wise, removing connections at each layer by solving a convex optimization program. This program seeks a sparse set of weights at each layer that keeps the layer inputs and outputs consistent with the originally trained model. The algorithms and associated analysis are applicable to neural networks operating with the rectified linear unit (ReLU) as the nonlinear activation. We present both parallel and cascade versions of the algorithm. While the latter can achieve slightly simpler models with the same generalization performance, the former can be computed in a distributed manner. In both cases, Net-Trim significantly reduces the number of connections in the network, while also providing enough regularization to slightly reduce the generalization error. We also provide a mathematical analysis of the consistency between the initial network and the retrained model. To analyze the model sample complexity, we derive the general sufficient conditions for the recovery of a sparse transform matrix. For a single layer taking independent Gaussian random vectors of length N as inputs, we show that if the network response can be described using a maximum number of s non-zero weights per node, these weights can be learned from O(s log N ) samples. | The paper presents a technique to sparsify a deep ReLU neural network by solving a sequence of convex problems at each layer. The convex problem finds the sparsest set of weights that approximates the mapping from one layer to another. The ReLU nonlinearity is dealt with by treating the activated and deactivated cases as two separate sets of constraints in the optimization problem, thus, bringing convexity.
Two variants are provided: one that considers each layer separately, and another that carries the approximation error into the next layer's optimization problem, giving the next layer a chance to counterbalance this error. In both cases, the authors provide bounds on the approximation error after sparsification.
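For concreteness, here is a rough sketch of the kind of layer-wise convex program described above, written with cvxpy; the tolerance, the sizes, and the exact constraint form are assumptions of mine rather than the paper's precise formulation:

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
n_in, n_out, n_samples = 20, 10, 100
X = rng.standard_normal((n_in, n_samples))            # layer inputs
W0 = rng.standard_normal((n_in, n_out))               # trained dense weights
Y = np.maximum(W0.T @ X, 0.0)                         # original ReLU outputs

on = (Y > 0).astype(float)                            # entries where the unit was active
off = 1.0 - on                                        # entries where it was inactive
eps = 0.05 * np.linalg.norm(Y, 'fro')                 # allowed output discrepancy (assumed)

W = cp.Variable((n_in, n_out))
Z = W.T @ X
constraints = [
    cp.norm(cp.multiply(on, Z - Y), 'fro') <= eps,    # active case: match the pre-activations
    cp.multiply(off, Z) <= eps,                       # inactive case: keep them (nearly) non-positive
]
prob = cp.Problem(cp.Minimize(cp.sum(cp.abs(W))), constraints)
prob.solve()
print("surviving weights:", int(np.sum(np.abs(W.value) > 1e-4)), "of", W0.size)
```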
The paper is very clear and well-written. A number of theoretical results come in support of the proposed method.
The experiments section shows results for various networks (fully connected and convolutional) on the MNIST data. The results are compared with a baseline which consists of removing the weights with the smallest magnitude from the network. The authors observe that their method works robustly while the baseline method lands in some plateau with very low accuracy.
The baseline proposed here looks particularly weak. Setting a large number of weights to zero slashes the variance of the neuron preactivations, and the negative biases will tend to drive activations to zero. Actually, in the HPTD paper, the authors obtain the best results by iteratively pruning the weights.
A pruning ratio of 80% is quite low. Other approaches such as SqueezeNet have achieved 98% parameter reduction on image data without a significant drop in performance.
nips_2017_1492 | Multitask Spectral Learning of Weighted Automata
We consider the problem of estimating multiple related functions computed by weighted automata (WFA). We first present a natural notion of relatedness between WFAs by considering to which extent several WFAs can share a common underlying representation. We then introduce the novel model of vector-valued WFA which conveniently helps us formalize this notion of relatedness. Finally, we propose a spectral learning algorithm for vector-valued WFAs to tackle the multitask learning problem. By jointly learning multiple tasks in the form of a vector-valued WFA, our algorithm enforces the discovery of a representation space shared between tasks. The benefits of the proposed multitask approach are theoretically motivated and showcased through experiments on both synthetic and real world datasets. | SUMMARY
The paper studies the problem of multitask learning of WFAs. It defines a notion of relatedness among tasks, and designs a new algorithm that can exploit such relatedness. Roughly speaking, the new algorithm stacks the Hankel matrices from different tasks together and performs an adapted version of spectral learning, resulting in a vv-WFA that can make vector-valued predictions with a unified state representation. A post-processing step that reduces the dimension of the WFA for each single task is also suggested to reduce noise. The algorithm is compared to the baseline of learning each task separately on both synthetic and real-world data.
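A minimal sketch of the "stack the Hankel matrices and take a joint spectral decomposition" idea as I understand it; the synthetic low-rank data and the plain truncated SVD are my own stand-ins, and the paper's vv-WFA construction will differ in detail:

```python
import numpy as np

rng = np.random.default_rng(0)
n_pref, n_suff, T, rank = 50, 50, 3, 5
# Hypothetical per-task Hankel matrices sharing a common low-rank structure plus noise.
shared = rng.standard_normal((n_pref, rank)) @ rng.standard_normal((rank, n_suff))
hankels = [shared + 0.1 * rng.standard_normal((n_pref, n_suff)) for _ in range(T)]

# Stack the per-task Hankels along the suffix axis and take a joint truncated SVD:
# the left factor gives a state representation shared across all tasks.
H_joint = np.concatenate(hankels, axis=1)        # shape (n_pref, T * n_suff)
U, s, Vt = np.linalg.svd(H_joint, full_matrices=False)
P_shared = U[:, :rank] * s[:rank]                # shared "forward" factor over prefixes
per_task = np.split(Vt[:rank, :], T, axis=1)     # task-specific factors over suffixes
print(P_shared.shape, per_task[0].shape)
```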
COMMENTS
Overall this is a well-written paper. However, I do have a concern about the experiment section: it is important to compare to the baseline where all data from different tasks are bagged together and treated as if they came from the same task. At least when all the tasks are the same, this should outperform everyone else as it makes full use of all the data. Of course, when the tasks are not related, such a practice may lead to asymptotic approximation error, but how large is this error practically? If this error is small on the datasets used in the experiment section, then such datasets are not interesting, as any algorithm that does some kind of data aggregation would show improvement over single-task learning. If possible I would like to see some results (even if they are preliminary) on this comparison during the rebuttal.
It would be good to also compare to Alg 1 without the projection step to see how much improvement this post-processing procedure brings.
The paper's presentation may be improved by discussing the application scenario of multi-task learning of WFAs. As a starter, one could consider natural language modeling tasks where we need to make predictions in different contexts (e.g., online chat vs newspaper articles) and have access to datasets in each of them. In this example, it is natural to expect that basic grammar is shared across the datasets and can be learned together. Of course, one can always aggregate all datasets into a big one and build a single model (which corresponds to the baseline I mentioned above), and the disadvantage is that the model cannot leverage the context information available at prediction phase.
Two additional suggestions:
- The current algorithm implicitly assumes equal weights among all the tasks. This should work well when the sizes of the datasets are roughly the same across tasks, but when they differ a lot I suspect that the algorithm could misbehave. In this case you might want to consider a weighted approach; see Kulesza et al., Low-Rank Spectral Learning with Weighted Loss Functions.
- Here is another reason for doing the projection step: consider the case when the m tasks are completely unrelated, and each of them requires n states. Single-task learning would need n*m^2 parameters for each character in the alphabet, while the multi-task learning uses a model of size (nm)^2. The projection step eliminates such redundancy.
MINOR ISSUE
Line 93: as far as I know, it is not required that empty string is included in prefixes or suffixes. (At least this is true in the PSR literature which I am more familiar with.) The author(s) might want to double check on this.
==============================
Thanks for the rebuttal and the additional results. No complaints! Will keep arguing for acceptance. |
nips_2017_913 | Nonbacktracking Bounds on the Influence in Independent Cascade Models
This paper develops upper and lower bounds on the influence measure in a network, more precisely, the expected number of nodes that a seed set can influence in the independent cascade model. In particular, our bounds exploit nonbacktracking walks, Fortuin-Kasteleyn-Ginibre type inequalities, and are computed by message passing algorithms. Nonbacktracking walks have recently allowed for headways in community detection, and this paper shows that their use can also impact the influence computation. Further, we provide parameterized versions of the bounds that control the trade-off between the efficiency and the accuracy. Finally, the tightness of the bounds is illustrated with simulations on various network models. | The paper develops upper and lower bounds, using correlation inequalities, on the value of the influence function (expected influenced set size) under the Independent Cascade model. The work is a clear improvement on the upper bounds of Lemonnier et al., NIPS 2014. The theoretical results are a solid contribution towards a bound-based analysis of influence computation and maximization, though the evaluation could be improved to better evaluate and communicate the impact of the results.
1) As future work building on this line of bounds, it would be interesting to see work connecting them to actual influence maximization, e.g. running a greedy IM procedure, and comparing the seed sets under greedy IM vs. seed sets returned by greedy maximization over the UB or LB measures (a minimal sketch of the Monte Carlo influence estimate such a comparison would rely on appears after these comments). It would have been nice to see some dialogue with maximization as part of the work, but it would have been difficult to fit into the short format.
2) In the Experimental Results (Section 4), it would be preferable to see the evaluation decomposed over the two main sources of randomness being studied, the graph and the seed node {s}. It would be interesting to dig into properties of the seed nodes for which the bounds are tighter or looser (degree? centrality? etc.). It would also be preferable to see an evaluation of larger seed sets, since part of the difficulty with computing influence is the interaction between seed nodes.
3) An obvious omission is the study of real world graphs, which have higher degrees and contain many more triangles than any of the simulated graphs considered in the paper. The higher degrees make backtracking steps in random walks less frequent. The triangles and other cycles generally impede message passing algorithms. It’d be worth evaluating and commenting on these considerations.
4) The state of the art for fast IM computation with approximation guarantees is, I believe, (Tang-Xiao-Shi, SIGMOD 14). Worth including in the citations to IM papers. Furthermore, the word "heuristic" used on Line 20 is usually reserved for algorithms without formal guarantees. The methods in the citation block [10, 17, 3, 7, 20] almost all feature formal guarantees for IM in terms of worst-case approximation ratios (some in probabilistic senses). To call all these papers "heuristic" feels overly dismissive. As a related comment on the literature review, on Line 32 "A few exceptions include" makes it sound like there is going to be more than one exception, but only one is given. Fine to rewrite to only expect one exception; better to give more cites.
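Regarding comment 1), any such comparison would ultimately be scored by a Monte Carlo estimate of the influence function; a minimal sketch of that estimator under the independent cascade model (a uniform edge probability and a toy random graph are my own choices) is:

```python
import numpy as np
import networkx as nx

def ic_influence(G, seeds, p=0.1, n_sim=1000, rng=None):
    """Monte Carlo estimate of expected spread under the independent cascade model."""
    rng = rng or np.random.default_rng(0)
    total = 0
    for _ in range(n_sim):
        active, frontier = set(seeds), set(seeds)
        while frontier:
            new = set()
            for u in frontier:
                for v in (G.successors(u) if G.is_directed() else G.neighbors(u)):
                    if v not in active and rng.random() < p:
                        new.add(v)
            active |= new
            frontier = new
        total += len(active)
    return total / n_sim

G = nx.erdos_renyi_graph(200, 0.03, seed=1)
print(ic_influence(G, seeds=[0], p=0.1))
```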
nips_2017_582 | Contrastive Learning for Image Captioning
Image captioning, a popular topic in computer vision, has achieved substantial progress in recent years. However, the distinctiveness of natural descriptions is often overlooked in previous work. It is closely related to the quality of captions, as distinctive captions are more likely to describe images with their unique aspects. In this work, we propose a new learning method, Contrastive Learning (CL), for image captioning. Specifically, via two constraints formulated on top of a reference model, the proposed method can encourage distinctiveness, while maintaining the overall quality of the generated captions. We tested our method on two challenging datasets, where it improves the baseline model by significant margins. We also showed in our studies that the proposed method is generic and can be used for models with various structures. | The paper proposes a contrastive learning approach for image captioning models. Typical image captioning models utilize log-likelihood criteria for learning, which tends to result in preferring safer generations that lack specific and distinct concepts in an image. The paper proposes to introduce a contrastive learning objective, where the objective function is based on the density ratio to a reference model, without altering the captioning models. The paper evaluates multiple models on the MSCOCO and InstaPIC datasets, demonstrates the effectiveness of the approach, and conducts ablation studies.
The paper is well-written and has strengths in the following points.
* Proposing a generalizable learning method
* Convincing empirical study
The paper could be improved in the following respects.
* Results might look insignificant depending on how they are interpreted
* Insufficient discussion on distinctiveness vs. human-like description
For the purpose of introducing distinctiveness into the image captioning problem, the paper considers altering the learning approach while using existing models. The paper takes contrastive learning ideas from NCE [5] and derives an objective function, eq. (10). By focusing on the contrastive component in the objective, the paper addresses the problem that learning under the MLE scheme results in generic descriptions. Although the proposed objective is similar to NCE, the approach is general and can be of benefit in other problem domains. This is certainly a technical contribution.
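As a generic illustration of the NCE-flavored recipe mentioned above (not necessarily the paper's exact eq. (10); the function and tensor names are hypothetical), the contrastive term could be sketched as:

```python
import torch

def contrastive_caption_loss(logp_model_pos, logp_ref_pos, logp_model_neg, logp_ref_neg):
    """Generic NCE-style contrastive objective of the kind the review describes.

    Inputs are log-probabilities log p(c | I) of matched ("pos") and mismatched ("neg")
    image-caption pairs under the trained model and a fixed reference model.
    """
    r_pos = logp_model_pos - logp_ref_pos          # log density ratio on true pairs
    r_neg = logp_model_neg - logp_ref_neg          # log density ratio on mismatched pairs
    return -(torch.nn.functional.logsigmoid(r_pos) +
             torch.nn.functional.logsigmoid(-r_neg)).mean()

# Toy usage with made-up log-probabilities:
lp = torch.tensor([-20.0, -18.0]); lq = torch.tensor([-21.0, -19.0])
ln = torch.tensor([-25.0, -24.0]); lqn = torch.tensor([-22.0, -21.0])
print(contrastive_caption_loss(lp, lq, ln, lqn))
```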
In addition, the paper conducts a thorough empirical study to show the effectiveness as well as the generalization across base models and datasets. Although the result is not necessarily the best all the time, depending on the evaluation scenario, I would point out that the proposed approach is independent of the base model yet consistently improves performance over the MLE baseline.
One thing I would point out is that the paper could discuss the nature of distinctiveness in image captions in more depth. As discussed in the introduction, distinctiveness is certainly one component overlooked in caption generation. However, the paper views distinctiveness as something that can be resolved by algorithms. I would like to argue that the nature of caption data can induce generic descriptions due to the vagueness of image content [Jas 2015]. Even if an image can be described by distinctive phrases, people may describe the content with comfortable words that are not always distinctive [Ordonez 2013]. In this respect, I would say the choice of dataset may or may not be appropriate for evaluating distinctive phrases. The paper could add more on data statistics and human behavior in image description.
* Jas, Mainak, and Devi Parikh. "Image specificity." CVPR 2015.
* Ordonez, Vicente, et al. "From large scale image categorization to entry-level categories." ICCV 2013.
Another concern is that image captioning has been studied extensively in the past, and unfortunately the paper might not have a large impact on the community.
Overall, the paper is well written and presents convincing results for the problem at hand. Some people might not like the small improvements in performance, but I believe the results also indicate good generalization ability. I think the paper is above the borderline.
nips_2017_1508 | REBAR: Low-variance, unbiased gradient estimates for discrete latent variable models
Learning in models with discrete latent variables is challenging due to high variance gradient estimators. Generally, approaches have relied on control variates to reduce the variance of the REINFORCE estimator. Recent work (Jang et al., 2016;Maddison et al., 2016) has taken a different approach, introducing a continuous relaxation of discrete variables to produce low-variance, but biased, gradient estimates. In this work, we combine the two approaches through a novel control variate that produces low-variance, unbiased gradient estimates. Then, we introduce a modification to the continuous relaxation and show that the tightness of the relaxation can be adapted online, removing it as a hyperparameter. We show state-of-the-art variance reduction on several benchmark generative modeling tasks, generally leading to faster convergence to a better final log-likelihood.
to significantly improve its effectiveness. We call this the REBAR gradient estimator, because it combines REINFORCE gradients with gradients of the Concrete relaxation. Next, we show that a modification to the Concrete relaxation connects REBAR to MuProp in the high temperature limit. Finally, because REBAR is unbiased for all temperatures, we show that the temperature can be optimized online to reduce variance further and relieve the burden of setting an additional hyperparameter.
In our experiments, we illustrate the potential problems inherent with biased gradient estimators on a toy problem. Then, we use REBAR to train generative sigmoid belief networks (SBNs) on the MNIST and Omniglot datasets and to train conditional generative models on MNIST. Across tasks, we show that REBAR has state-of-the-art variance reduction which translates to faster convergence and better final log-likelihoods. Although we focus on binary variables for simplicity, this work is equally applicable to categorical variables (Appendix C). | This paper introduces a control variate technique to reduce the variance of the REINFORCE gradient estimator for discrete latent variables. The method, called REBAR, is inspired by the Gumbel-Softmax/Concrete relaxations; however, in contrast to those, REBAR provides an unbiased gradient estimator. The paper shows a connection between REBAR and MuProp. The variance of the REBAR estimator is compared to state-of-the-art methods on sigmoid belief networks. The paper focuses on binary discrete latent variables.
Overall, I found this to be an interesting paper that addresses a relevant problem, namely, inexpensive low-variance gradient estimators for discrete latent variables. The writing quality is good, although there are some parts that weren't clear to me (see comments below). The connections to MuProp and Gumbel-Softmax/Concrete are clear.
Please find below a list with detailed comments and concerns:
- The paper states several times that p(z)=p(z|b)p(b). This is confusing, as p(z|b)p(b) should be the joint p(z,b) (written out explicitly after this list of comments). I think that the point is that the joint can be alternatively written as p(z,b)=p(b|z)p(z), where the first term is an indicator function, which takes value 1 if b=H(z) and zero otherwise, and that it motivates dropping this term. But being rigorous, the indicator function should be kept. So p(b|z)p(z)=p(z|b)p(b), and when b=H(z), then p(z)=p(z|b)p(b). I don't think the derivations in the paper are wrong, but this issue was confusing to me and should be clarified.
- It is not clear to me how the key equation in the paper was obtained (the unnumbered equation between lines 98-99). The paper just reads "Putting the terms together, we arrive at", but this wasn't clear to me. The paper would definitely be improved by adding these details in the main text or in the Appendix.
- In the same equation, the expectation w.r.t. p(u,v), where u, v ~ Uniform(0,1) is also misleading, because they're not independent. As Appendix 7.5 reads, "a choice of u will determine a corresponding choice of v which produces the same z", i.e., p(u,v)=p(u)p(v|u), with p(v|u) being deterministic. In other words, I don't see how "using this pair (u,v) as the random numbers" is a choice (as the paper suggests) rather than a mathematical consequence of the procedure.
- The paper states that REBAR is equally applicable to the non-binary case (lines 44-45). I think the paper can be significantly improved by including the mathematical details in the Appendix to make this explicit.
- In the experimental section, all figures show "steps" (of the variational procedure) in the x-axis. This should be replaced with wall-clock time to get a sense of the computational complexity of each approach.
- In the experiments, it would be interesting to see a comparison with "Local expectation gradients" [Titsias & Lazaro-Gredilla].
- In the experimental section, do the authors have any intuition on why MuProp excels in the structured prediction task for Omniglot? (Figs. 6 and 7)
- The procedure to optimize temperature (Sec 3.2) seems related to the procedure to optimize temperature in "Overdispersed black-box VI" [Ruiz et al.]; if that's the case, the connection can be pointed out.
- The multilayer section (Sec 3.3) seems to be a Rao-Blackwellization procedure (see, e.g., [Ranganath et al.]). If that's the case, the authors should mention that.
- In line 108, there seems to be a missing 1/lambda term.
- In lines 143-144 and 193, there are missing parentheses in the citations.
- Consider renaming Appendix 7 as Appendix 1. |
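For reference, the identity discussed in the first detailed comment above can be written out explicitly (my notation, with b = H(z) a deterministic threshold of z):

\[ p(z, b) \;=\; p(z)\, p(b \mid z) \;=\; p(z)\, \mathbf{1}[\, b = H(z) \,] \;=\; p(b)\, p(z \mid b), \]

so on the event b = H(z) the joint reduces to p(z), which is presumably what the paper abbreviates as p(z) = p(z|b) p(b).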
nips_2017_2208 | Random Permutation Online Isotonic Regression
We revisit isotonic regression on linear orders, the problem of fitting monotonic functions to best explain the data, in an online setting. It was previously shown that online isotonic regression is unlearnable in a fully adversarial model, which led to its study in the fixed design model. Here, we instead develop the more practical random permutation model. We show that the regret is bounded above by the excess leave-one-out loss for which we develop efficient algorithms and matching lower bounds. We also analyze the class of simple and popular forward algorithms and recommend where to look for algorithms for online isotonic regression on partial orders. | I am not an expert in online learning, and did not read the proofs in the appendix. My overall impression of the paper is positive, but I am not able to judge the importance of the results or the novelty of the analysis techniques. My somewhat indifferent score is more a reflection of this than of the quality of the paper.
Summary: The authors study isotonic regression in an online setting, where an adversary initially chooses the dataset but the examples are shown in random order. Regret is measured against the best isotonic function for the data set.
The main contribution of the paper seems to be in Section 4, i.e., a class of "forward algorithms" which encompass several well-known methods and achieve sqrt(T) regret. The authors also prove a number of complementary results, such as lower bounds and results for different settings and loss functions.
Can the authors provide additional motivation for studying the random permutation model? I don't find the practical motivation in lines 46-47 particularly convincing. It might help to elaborate on what is difficult/interesting for the learner in this setting and what makes the analysis different from existing work (e.g. [14]).
Section 3: How do Theorems 3.1 and 3.2 fit into the story of the paper? Are they simply some complementary results or are they integral to the results in Section 4?
- The estimator in (3) doesn't seem computationally feasible. Footnote 2 states that the result holds in expectation if you sample a single data point and permutation, but this is likely to have high variance. Can you comment on how the variance decreases when you sample multiple data points and multiple permutations?
Clarity: Despite my relative inexperience in the field, I was able to follow most of the details in the paper. That said, I felt the presentation was math-driven and could be toned down in certain areas, e.g. lines 149-157, 242-250.
While there seem to be some gaps in the results, the authors have been fairly thorough in exploring several avenues (e.g. sections 4.3-4.6). The paper makes several interesting contributions that could be useful for this line of research.
----------------------------------------------------
Post rebuttal: I have read the authors' rebuttal and am convinced by their case for the setting. I have upgraded my score. |
nips_2017_249 | Best of Both Worlds: Transferring Knowledge from Discriminative Learning to a Generative Visual Dialog Model
We present a novel training framework for neural sequence models, particularly for grounded dialog generation. The standard training paradigm for these models is maximum likelihood estimation (MLE), or minimizing the cross-entropy of the human responses. Across a variety of domains, a recurring problem with MLE trained generative neural dialog models (G) is that they tend to produce 'safe' and generic responses ('I don't know', 'I can't tell'). In contrast, discriminative dialog models (D) that are trained to rank a list of candidate human responses outperform their generative counterparts; in terms of automatic metrics, diversity, and informativeness of the responses. However, D is not useful in practice since it can not be deployed to have real conversations with users. Our work aims to achieve the best of both worlds -the practical usefulness of G and the strong performance of D -via knowledge transfer from D to G. Our primary contribution is an end-to-end trainable generative visual dialog model, where G receives gradients from D as a perceptual (not adversarial) loss of the sequence sampled from G. We leverage the recently proposed Gumbel-Softmax (GS) approximation to the discrete distribution -specifically, a RNN augmented with a sequence of GS samplers, coupled with the straight-through gradient estimator to enable end-to-end differentiability. We also introduce a stronger encoder for visual dialog, and employ a self-attention mechanism for answer encoding along with a metric learning loss to aid D in better capturing semantic similarities in answer responses. Overall, our proposed model outperforms state-of-the-art on the VisDial dataset by a significant margin (2.67% on recall@10). | This paper describes an improved training procedure for visual dialogue models.
Rather than maximizing the likelihood of a collection of training captions, this approach first trains a discriminator model to rank captions in a given context by embedding them in a common space, then uses scores from this discriminator as an extra component in a loss function for a generative sequence prediction model. This improved training procedure produces modest improvements on an established visual dialogue benchmark over both previous generative approaches as well as adversarial training.
I think this is a pretty good paper, though there are a few places in which the presentation could be improved.
SPECIFIC COMMENTS
The introduction claims that the discriminator "has access to more information than the generator". I'm not clear on what this means. It is true that in a single update step, the discriminator here is trained to distinguish the ground truth answer from a collection of answers, while in the standard conditional GAN setting the discriminator compares only one sample at a time to the ground truth. But over the course of training a normal GAN discriminator will also get access to lots of samples from the training distribution. More generally, there's an equivalence between the problem of learning to assign the right probability to a sample from a distribution and learning to classify whether a sample came from the target distribution or a noise distribution (see e.g. the NCE literature). Based on this section, I expected the discriminator to be able to learn features on interactions between multiple candidates, and was confused when it wound up assigning an embedding to each candidate independently.
Related work: given the offered interpretation of D as a "perceptual loss", I was surprised not to see any mention in the related work section of similar loss functions used in image generation tasks (e.g. Johnson et al.). I was not able to find anything in a quick lit search of similar losses being used for natural language processing tasks, which is one of the things that's exciting about this paper, but you should still discuss similar work in other areas.
@137 it's standard form
The explanation offered for the objective function at 177 is a little strange. This is a logistic regression loss which encourages the model to assign the highest probability to the ground-truth caption, and I think it will be most understandable to readers if presented in that familiar form. In particular, it does not have an interpretation as a metric-learning procedure unless the norms of the learned representations are fixed (which doesn't seem to be happening here), and it is not a margin loss.
I don't understand what "assigned on-the-fly" @186 means. Just that you're reusing negative examples across multiple examples in the same batch?
@284 modifications to encoder alone
Table captions should explain that MRR -> "mean reciprocal rank" and especially "mean" -> "mean rank".
nips_2017_870 | DPSCREEN: Dynamic Personalized Screening
Screening is important for the diagnosis and treatment of a wide variety of diseases. A good screening policy should be personalized to the features of the patient and to the dynamic history of the patient (including the history of screening). The growth of electronic health records data has led to the development of many models to predict the onset and progression of different diseases. However, there has been limited work to address the personalized screening for these different diseases. In this work, we develop the first framework to construct screening policies for a large class of disease models. The disease is modeled as a finite state stochastic process with an absorbing disease state. The patient observes an external information process (for instance, self-examinations, discovering comorbidities, etc.) which can trigger the patient to arrive at the clinician earlier than scheduled screenings. The clinician carries out the tests; based on the test results and the external information it schedules the next arrival. Computing the exactly optimal screening policy that balances the delay in the detection against the frequency of screenings is computationally intractable; this paper provides a computationally tractable construction of an approximately optimal policy. As an illustration, we make use of a large breast cancer data set. The constructed policy screens patients more or less often according to their initial risk -it is personalized to the features of the patient -and according to the results of previous screens -it is personalized to the history of the patient. In comparison with existing clinical policies, the constructed policy leads to large reductions (28-68%) in the number of screens performed while achieving the same expected delays in disease detection. | The objective of the paper is to find the best policy for patient screening given the pertinent information. To provide the policy, a disease should be modeled as a finite state stochastic process. The policy is trying to minimize screening costs and delays. The authors propose an approximate solution that is computationally efficient. The experiments on simulated data related to breast cancer indicate that the proposed algorithm could reduce delays while keeping the same cost when compared to a trivial baseline
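To make the delay-versus-frequency trade-off at the heart of the paper concrete, here is a toy Monte Carlo illustration with a geometric onset time and fixed-interval screening; this is my own simplification and not the personalized, history-dependent policy the paper constructs:

```python
import numpy as np

rng = np.random.default_rng(0)
p_onset = 0.02          # per-step probability of transitioning to the disease state (assumed)
horizon = 600

def simulate(interval, n_runs=5000):
    delays, screens = [], []
    for _ in range(n_runs):
        onset = rng.geometric(p_onset)                    # step at which the disease starts
        screen_times = np.arange(interval, horizon + 1, interval)
        detected = screen_times[screen_times >= onset]
        if len(detected) == 0:                            # not detected within the horizon
            continue
        delays.append(detected[0] - onset)                # detection delay
        screens.append(np.searchsorted(screen_times, detected[0], side='right'))
    return np.mean(delays), np.mean(screens)

for k in (6, 12, 24):
    d, s = simulate(k)
    print(f"screen every {k:2d} steps: mean delay {d:5.1f}, mean #screens until detection {s:5.1f}")
```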
Positives:
+ personalized screening is an important topic of research
+ the proposed algorithm is reasonable and well-developed
+ the results are promising
+ the paper is well written
Negatives:
- the proposed algorithm is purely an academic exercise. It assumes that the disease model of a given patient is known, which can never be assumed in practice. The main obstacle in personalized screening is not coming up with a policy when the model is known, but inferring the model from data.
- the methodological novelty is not large
- the experiments only compare the proposed algorithm to a baseline doing annual screening. Given the previous point about the disease model, an important question that should be studied experimentally is the robustness of the proposed method to inaccuracies in the disease model. In other words, how sensitive is the policy to uncertainties in the disease model.
Overall, the topic is interesting and the proposed algorithm is reasonable. However, the methodological novelty is not large, and the paper would need to do a better job of convincing readers that this work is practically relevant.
nips_2017_2096 | Variational Laws of Visual Attention for Dynamic Scenes
Computational models of visual attention are at the crossroad of disciplines like cognitive science, computational neuroscience, and computer vision. This paper proposes a model of attentional scanpath that is based on the principle that there are foundational laws that drive the emergence of visual attention. We devise variational laws of the eye-movement that rely on a generalized view of the Least Action Principle in physics. The potential energy captures details as well as peripheral visual features, while the kinetic energy corresponds with the classic interpretation in analytic mechanics. In addition, the Lagrangian contains a brightness invariance term, which characterizes significantly the scanpath trajectories. We obtain differential equations of visual attention as the stationary point of the generalized action, and we propose an algorithm to estimate the model parameters. Finally, we report experimental results to validate the model in tasks of saliency detection. | Variational Laws of Visual Attention for Dynamic Scenes
The authors investigate what locations in static and dynamic images tend to be attended to by humans. They derive a model by first defining three basic principles for visual attention (defined as an energy function to be minimized by the movements of the eye): (1) Eye movements are constrained by a harmonic oscillator at the borders of the image within a limited-sized retina. (2) a “curiosity driven principle” highlighting the regions with large changes in brightness in both a fine and blurred version of the image, and (3) brightness invariance, which increases as a function of changes in brightness. Using a cost function derived from these three functions, the authors derive differential equations for predicting the eye movements across static or dynamic images (depending on the starting location and initial velocity). The authors evaluate their technique quantitatively on data sets of static and dynamic scenes coupled with human eye movements. They demonstrate that their method performs comparable to the state-of-the-art.
Formal definitions of saliency and modeling eye movements are critical issues in computational vision and cognitive science. Psychologists have long been plagued by vague definitions of saliency, and the authors propose a novel and innovative model (as far as I am aware) that could aid the development of a better understanding of what makes something salient, as well as a formal model for eye movements (within the bottom-up tradition). Although it is not necessarily state-of-the-art on every metric for every data set, it performs well and provides a refreshingly different perspective on the problem.
Unfortunately, some of what I wrote above is based on conjecture as the paper is poorly written and hard to follow. I recommend the authors have others proofread the paper and expand on abbreviations (both within the equations and also those used in Section 3). I would recommend they move the 2 page Appendix to supplementary material and use those extra pages to define each variable and function used (even if it is a convention within your own field – the NIPS audience comes from many different disciplines and some will have trouble following the mathematics otherwise).
As a psychologist, I would have liked the authors to connect their work to some of the psychological literature on eye movements as optimal steps for gathering information. See for example, Najemnik, J., & Geisler, W. S. (2008). Eye movement statistics in humans are consistent with an optimal search strategy. Journal of Vision, 8(3), 1-14. There are other relevant articles (particularly from Geisler’s lab, but that should give the authors a pointer to follow into that literature). I would be interested to see a discussion of how their approach compares to their findings (e.g., are they restatements of similar models or provide independent information that could be integrated to produce a better model?) |
nips_2017_3039 | A framework for Multi-A(rmed)/B(andit) Testing with Online FDR Control
We propose an alternative framework to existing setups for controlling false alarms when multiple A/B tests are run over time. This setup arises in many practical applications, e.g. when pharmaceutical companies test new treatment options against control pills for different diseases, or when internet companies test their default webpages versus various alternatives over time. Our framework proposes to replace a sequence of A/B tests by a sequence of best-arm MAB instances, which can be continuously monitored by the data scientist. When interleaving the MAB tests with an online false discovery rate (FDR) algorithm, we can obtain the best of both worlds: low sample complexity and any time online FDR control. Our main contributions are: (i) to propose reasonable definitions of a null hypothesis for MAB instances; (ii) to demonstrate how one can derive an always-valid sequential p-value that allows continuous monitoring of each MAB test; and (iii) to show that using rejection thresholds of online-FDR algorithms as the confidence levels for the MAB algorithms results in both sample-optimality, high power and low FDR at any point in time. We run extensive simulations to verify our claims, and also report results on real data collected from the New Yorker Cartoon Caption contest. | The paper looks at continuous improvement using a sequence of A/B tests, and proposes instead to implement adaptive testing such as
a multi-armed bandit approach, while controlling the false discovery rate. This is an important problem discussed in the statistical literature, but still unsolved.
The approach proposed in this paper appears to solve these issues. This is a very interesting paper that, despite the minor concerns listed below, could lead to a new avenue of research.
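To make the interleaving between bandit tests and online FDR control concrete, here is a minimal sketch of the loop as I understand it from the paper's description; next_alpha (an online-FDR rule such as LORD) and best_arm_mab (a best-arm identification routine that returns an always-valid p-value) are hypothetical placeholders of mine, not the authors' implementation.

def sequence_of_mab_tests(experiments, next_alpha, best_arm_mab):
    # experiments: iterable of bandit instances, one per A/B/n test
    # next_alpha: online-FDR rule mapping the history of rejections to a level alpha_j
    # best_arm_mab: runs best-arm identification at confidence alpha_j and returns an
    #               always-valid p-value for the null "no treatment arm beats control"
    rejections = []
    for experiment in experiments:
        alpha_j = next_alpha(rejections)       # FDR budget granted to this test
        p_value = best_arm_mab(experiment, alpha_j)
        rejections.append(p_value <= alpha_j)  # decision is fed back to the FDR rule
    return rejections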
Lines 24-37: There are well-known issues, and it would be desirable to add citations. Although authors clearly focus on CS/ML literature, there is also a relevant body of literature in biometrics, see e.g. survey by Villar, Bowden and Wason (Statistical Science 2015), the references therein and the more recent papers citing this survey.
Line 37: "testing multiple literature" -> "multiple testing literature"
Line 38-47: A similar concept exists in biometrics, called "platform trials" - please describe how your concept differs
Line 112: "and and" -> "and"
Line 115: please provide reference for and description of LUCB
Line 153: "samplesas" -> "samples as"
Line 260: "are ran" -> ?
Line 273: It is not clear what "truncation time" is and why it is introduced - it seems to have a huge effect on the results in Figure 2
Line 288-291: While this motivation is interesting, it seems to be mentioned at an inappropriate place in the paper - why not to do it in the introduction, alongside website management and clinical trials? |
nips_2017_2097 | Recursive Sampling for the Nyström Method
We give the first algorithm for kernel Nyström approximation that runs in linear time in the number of training points and is provably accurate for all kernel matrices, without dependence on regularity or incoherence conditions. The algorithm projects the kernel onto a set of s landmark points sampled by their ridge leverage scores, requiring just O(ns) kernel evaluations and O(ns 2 ) additional runtime. While leverage score sampling has long been known to give strong theoretical guarantees for Nyström approximation, by employing a fast recursive sampling scheme, our algorithm is the first to make the approach scalable. Empirically we show that it finds more accurate kernel approximations in less time than popular techniques such as classic Nyström approximation and the random Fourier features method. | The authors provide an algorithm which learns a provably accurate low-rank approximation to a PSD matrix in sublinear time. Specifically, it learns an approximation that has low additive error with high probability by sampling columns from the matrix according to a certain importance measure, then forming a Nystrom approximation using these kernels. The importance measure used is an estimate of the ridge leverage scores, which are expensive to compute exactly ( O(n^3) ). Their algorithm recursively estimates these leverage score by starting from a set of columns, using those to estimate the leverage scores, sampling a set of columns according to those probabilities, and repeating ... the authors show that when this process is done carefully, the leverage score estimates are accurate enough that they can be used to get almost as good an approximation as using the true ridge leverage scores. The cost of producing the final approximation is O(ns^2) computation time and O(ns) computations of entries in the PSD matrix.
This is the first algorithm which allows touching only a linear number of entries in a PSD matrix to obtain a provably good approximation --- previous methods with strong guarantees independent of the properties of the PSD matrix required forming all n^2 entries in the PSD matrix. The experimental results show that the method provides more accurate kernel approximations than Nystrom approximations formed using uniform column samples, and kernel approximations formed using Random Fourier Features. These latter two are currently the most widely used randomized low-rank kernel approximations in ML, as they are cheap to compute. A heuristic modification of the algorithm brings its runtimes down to close to that of RFFs and uniform sampled Nystrom approximations, but retains the high accuracy.
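As a rough illustration of the quantities involved (not the authors' recursive scheme itself), the sketch below forms the Nyström approximation from a set of landmark columns and computes a subset-based estimate of the ridge leverage scores; the exact estimator and weighting used in the paper may differ, so treat the formula as my assumption.

import numpy as np

def nystrom_and_rls_estimates(K, landmarks, lam):
    # K: n x n PSD kernel matrix (in practice only the required entries are evaluated)
    # landmarks: indices of the s sampled columns; lam: ridge parameter
    C = K[:, landmarks]                          # n x s sampled columns
    W = K[np.ix_(landmarks, landmarks)]          # s x s intersection block
    K_nys = C @ np.linalg.pinv(W) @ C.T          # Nystrom approximation of K
    # subset-based estimate of the ridge leverage scores (assumed form)
    inv = np.linalg.inv(W + lam * np.eye(len(landmarks)))
    rls = (np.diag(K) - np.einsum('ij,jk,ik->i', C, inv, C)) / lam
    return K_nys, np.clip(rls, 0.0, None)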
I see one technical problem in the paper: the proof of Lemma 5, in the display following line 528, the first inequality seems to use the fact that if W is a diagonal matrix and 0 <= W <= Id in the semidefinite sense, then for any PSD matrix A, we have WAW <= A. This is not true, as can be seen by taking A to be the rank one outer product of [1;-1], W to be diag(1, 1/2), and x = [1, 1]. Then x^TWAWx > x^tAx = 0. Please clarify the proof if the first inequality holds for some other reason, or otherwise establish the lemma.
Comments:
-In section 3.2, when you introduce effective dimension, you should also cite "Learning Bounds for Kernel Regression Using Effective Data Dimensionality", Tong Zhang, Neural Computation, 2005. He calls the same expression the effective dimension of kernel methods
-Reference for the claim on lines 149-150? Or consider stating it as a simple lemma for future reference
-line 159, cite "On the Impact of Kernel Approximation on Learning Accuracy", Cortes et al., AISTATS, 2010 and "Efficient Non-Oblivious Randomized Reduction for Risk Minimization with Improved Excess Risk Guarantee", Xu et al., AAAI, 2017
-line 163, the reference to (13) should actually be to (5)
-on line 3 of Alg 2, I suggest changing the notation of the "equals" expression slightly so that it is consistent with the definition of \tilde{l_i^lambda} in Lemma 6
-the \leq sign in the display preceding line 221 is mistyped
-the experimental results with Gaussian kernels are very nice, but I think it would be useful to see how the method performs on a wider range of kernels.
-would like to see "Learning Kernels with Random Features" by Sinha and Duchi referenced and compared to experimentally
-would like to see "Learning Multidimensional Fourier Series With Tensor Trains", Wahls et al., GlobalSIP, 2014 cited as a related work (they optimize over frequencies to do regression versus 'random' features) |
nips_2017_1796 | Regret Minimization in MDPs with Options without Prior Knowledge
The option framework integrates temporal abstraction into the reinforcement learning model through the introduction of macro-actions (i.e., options). Recent works leveraged the mapping of Markov decision processes (MDPs) with options to semi-MDPs (SMDPs) and introduced SMDP-versions of exploration-exploitation algorithms (e.g., RMAX-SMDP and UCRL-SMDP) to analyze the impact of options on the learning performance. Nonetheless, the PAC-SMDP sample complexity of RMAX-SMDP can hardly be translated into equivalent PAC-MDP theoretical guarantees, while the regret analysis of UCRL-SMDP requires prior knowledge of the distributions of the cumulative reward and duration of each option, which are hardly available in practice. In this paper, we remove this limitation by combining the SMDP view together with the inner Markov structure of options into a novel algorithm whose regret performance matches UCRL-SMDP's up to an additive regret term. We show scenarios where this term is negligible and the advantage of temporal abstraction is preserved. We also report preliminary empirical results supporting the theoretical findings. | Overview:
The authors attempt to improve current regret estimation for HRL methods using options. In particular they attempt to do so in the absence of a distribution of cumulative rewards and of option durations, which is a requirement of previous methods (UCRL and SUCRL).
After assuming that options are well defined, the authors proceed to transform the inner MDP of options, represented by a transition matrix Po, into an equivalent irreducible Markov chain with matrix P'o. This is done by merging the terminal states to the initial state.
By doing so, and assuming that any state with a termination probability lower than one can be reached, the stationary distribution mu_o of the chain is obtainable, which in turn is utilized to estimate the optimistic reward gain. With respect to previous methods, this formulation is more robust to ill-defined estimates of the parameters, and better accounts for the correlation between cumulative reward and duration. This method is coined FSUCRL.
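To make the construction explicit, a small sketch under my reading of the paper is given below: the termination mass v_o is folded back onto the option's initial state (whether the first column of Q_o is replaced by v_o or the terminating mass is simply added to it is exactly the ambiguity raised in my comment on Line 140 below; the sketch takes the additive reading), and mu_o is read off as the leading left eigenvector of the resulting irreducible chain.

import numpy as np

def option_stationary_distribution(Q_o, v_o):
    # Q_o: (s x s) float transition matrix among the option's inner states (rows may sum to < 1)
    # v_o: (s,) termination probability from each inner state
    P_o = Q_o.astype(float)
    P_o[:, 0] += v_o                     # terminal states are merged back to the initial state
    vals, vecs = np.linalg.eig(P_o.T)    # stationary distribution = left eigenvector for eigenvalue 1
    mu = np.real(vecs[:, np.argmin(np.abs(vals - 1.0))])
    mu = np.abs(mu)
    return mu / mu.sum()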
The algorithm is complemented by estimating confidence intervals for the reward r, for the SMDP transition probabilities, and for the inner Markov chain P'o. Two versions of this same algorithm, FSUCRL Lvl1 and Lvl2, are proposed. The first version requires directly computing an approximated distribution mu_o from the estimate of P'o. The second version nests two empirical value iterations to obtain the optimal bias.
The paper concludes with a theoretical analysis and a numerical application of FSUCRL. On theoretical grounds, FSUCRL Lvl2 is compared to SUCRL. The authors argue that the goodness of the bound on the regret predicted by FSUCRL Lvl2 compared to that of SUCRL depends on various factors, including the length of the options and the accessibility of states within the option itself, and provide conditions where FSUCRL Lvl2 is likely to perform better than SUCRL.
As an indication, 4 algorithms, UCRL, SUCRL, FSUCRL Lvl1 and FSUCRL Lvl2, are tested on a gridworld taken from ref. 9, where the maximum duration of the options is assumed known. Results confirm the theoretical expectations previously discussed: FSUCRL Lvl2 performs better than both SUCRL and FSUCRL Lvl1, partially due to the fact that the options' actions overlap.
Evaluation:
- Quality: The paper appears to be theoretically sound and the problem discussion is complete. The authors discuss in depth strength and weaknesses of their approach with respect to the previous SUCRL. The provided numerical simulation is not conclusive but supports the above considerations;
- Clarity: the paper could be clearer but is sufficiently clear. The authors provide an example and a theoretical discussion which help understanding the mathematical framework;
- Originality: the work seems to be sufficiently original with respect to its predecessor (SUCRL) and with respect to other published works in NIPS;
- Significance: the motivation of the paper is clear and relevant since it addresses a significant limitation of previous methods;
Other comments:
- Line 140: here the first column of Qo is replaced by vo to form P'o, so that the first state is no longer reachable except from a terminating state. I assume that either Ass. 1 (finite length of an option) or Ass. 2 (the starting state is a terminal state) clarifies this choice. If this is the case, the authors should mention the connection between the two;
- Line 283: "four" -> "for";
- Line 284: "where" s-> "were"; |
nips_2017_341 | Mixture-Rank Matrix Approximation for Collaborative Filtering
Low-rank matrix approximation (LRMA) methods have achieved excellent accuracy among today's collaborative filtering (CF) methods. In existing LRMA methods, the rank of user/item feature matrices is typically fixed, i.e., the same rank is adopted to describe all users/items. However, our studies show that submatrices with different ranks could coexist in the same user-item rating matrix, so that approximations with fixed ranks cannot perfectly describe the internal structures of the rating matrix, therefore leading to inferior recommendation accuracy. In this paper, a mixture-rank matrix approximation (MRMA) method is proposed, in which user-item ratings can be characterized by a mixture of LRMA models with different ranks. Meanwhile, a learning algorithm capitalizing on iterated condition modes is proposed to tackle the non-convex optimization problem pertaining to MRMA. Experimental studies on MovieLens and Netflix datasets demonstrate that MRMA can outperform six state-of-the-art LRMA-based CF methods in terms of recommendation accuracy. | This is an excellent paper, proposing a sound idea of approximating a partially defined rating matrix with a combination of multiple low rank matrices of different ranks in order to learn well the head user/item pairs (users and items with lots of ratings) as well as the tail user/item pairs (users and items we few ratings). The idea is introduced clearly. The paper makes a good review of the state-of-the-art, and the experiment section is solid with very convincing results.
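To make the idea concrete, a sketch of what a mixture-rank prediction could look like is given below; the particular parameterization of the mixture weights (per-user and per-item) is my assumption for illustration rather than the paper's exact model.

import numpy as np

def mrma_predict(u, i, factors, user_w, item_w):
    # factors: list of (U_k, V_k) pairs, one low-rank model per candidate rank k
    # user_w[k][u], item_w[k][i]: nonnegative mixture weights (assumed parameterization)
    scores = np.array([U[u] @ V[i] for U, V in factors])
    weights = np.array([user_w[k][u] * item_w[k][i] for k in range(len(factors))])
    weights = weights / weights.sum()
    return float(weights @ scores)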
In reading the introduction, the reader could find the statement in lines 25-27, about the correlation between the number of user-item ratings and the desired rank, controversial. One could imagine that a subgroup of users and items has a large number of ratings but in a consistent way, which can be explained with a low-rank matrix. The idea becomes clear later in the paper, when explained in the light of overfitting and underfitting. The ambiguity could be avoided in this early section by adding a comment along the lines of “seeking a low rank is a form of regularization”.
nips_2017_3464 | A Meta-Learning Perspective on Cold-Start Recommendations for Items
Matrix factorization (MF) is one of the most popular techniques for product recommendation, but is known to suffer from serious cold-start problems. Item cold-start problems are particularly acute in settings such as Tweet recommendation where new items arrive continuously. In this paper, we present a meta-learning strategy to address item cold-start when new items arrive continuously. We propose two deep neural network architectures that implement our meta-learning strategy. The first architecture learns a linear classifier whose weights are determined by the item history while the second architecture learns a neural network whose biases are instead adjusted. We evaluate our techniques on the real-world problem of Tweet recommendation. On production data at Twitter, we demonstrate that our proposed techniques significantly beat the MF baseline and also outperform production models for Tweet recommendation. | This is an interesting and well-written paper but there are some parts that are not well explained, hence my recommendation. These aspects are not clear:
1. I am not sure about the "meta-learning" part. The recommendation task is simply formulated as a binary classification task (without using matrix factorization). The relation to meta-learning is not convincing to me.
2. "it becomes natural to take advantage of deep neural networks (the common approach in meta-learning)" - this is not a valid claim - deep learning is not the common approach for meta-learning; please see the papers by Brazdil and also the survey by Vilaltra & Drissi.
3. What is the input to the proposed 2 neural network architectures and what is its dimensionality? This should be clearly described.
4. I don't understand why in the first (linear) model the bias is constant and the weights are adapted, while the opposite applies for the second (non-linear) model - the weights are fixed and the biases are adapted. This is an optimisation task and all parameters are important. Have you tried adapting all parameters?
5. The results are not very convincing - a small improvement that may not be statistically significant.
6. Evaluation
-Have you sampled the same number of positive and negative examples for each user when creating the training and testing data?
-How was the evaluation done for items that were neither liked nor disliked by a user?
-Why are the 3 baselines called "industrial"? Are they the typically used baselines?
-Is your method able to generate recommendations to all users or only for some? Is it able to recommend all new items? In other words, what is the coverage of your method?
-It would be useful to compare your method with a pure content-based recommender. Are any of the baselines purely content-based?
7. Discuss the "black-box" aspect of using neural networks for making recommendations (lack of interpretability)
These issues need to be addressed and explained during the rebuttal. |
nips_2017_979 | Linear regression without correspondence
This article considers algorithmic and statistical aspects of linear regression when the correspondence between the covariates and the responses is unknown. First, a fully polynomial-time approximation scheme is given for the natural least squares optimization problem in any constant dimension. Next, in an average-case and noise-free setting where the responses exactly correspond to a linear function of i.i.d. draws from a standard multivariate normal distribution, an efficient algorithm based on lattice basis reduction is shown to exactly recover the unknown linear function in arbitrary dimension. Finally, lower bounds on the signal-to-noise ratio are established for approximate recovery of the unknown linear function by any estimator. | The article "Linear regression without correspondence" considers the problem of estimation in linear regression model in specific situation where the correspondence between the covariates and the responses is unknown. The authors propose the fully polynomial algorithms for the solution of least squares problem and also study the statistical lower bounds.
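To pin down the objective under discussion: the least squares problem here is a joint minimization over the weights and an unknown permutation of the responses. The brute-force sketch below (feasible only for tiny n) is meant only to state the objective, not to represent any of the algorithms proposed in the paper.

import itertools
import numpy as np

def shuffled_least_squares_bruteforce(X, y):
    # X: (n, d) array, y: (n,) array
    # min over permutations pi and weights w of ||y[pi] - X w||^2
    best_loss, best_w, best_pi = np.inf, None, None
    for pi in itertools.permutations(range(len(y))):
        y_pi = y[list(pi)]
        w, *_ = np.linalg.lstsq(X, y_pi, rcond=None)
        loss = float(np.sum((y_pi - X @ w) ** 2))
        if loss < best_loss:
            best_loss, best_w, best_pi = loss, w, pi
    return best_loss, best_w, best_pi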
The main emphasis of the article is on the construction of fully polynomial algorithms for the least squares problem in the noisy and noiseless cases, while previously only algorithms with exponential complexity were known for dimension d > 1. For the noisy case the authors propose an algorithm which gives a solution of the least squares problem with any prespecified accuracy. For the noiseless case another algorithm is proposed, which gives the exact solution of the least squares problem. Finally, the authors prove an upper bound on the range of signal-to-noise ratio values for which consistent parameter recovery is impossible.
In general, the proposed algorithms, though not practical, help to make an important step in understanding the computational limits of linear regression without correspondence. The statistical analysis is limited to a lower bound, while for the upper bound the authors refer to the paper by Pananjady et al. (2016). What puzzles me is that the provided lower bound for parameter recovery is d / log log n, while in Pananjady et al. (2016) the lower bound for permutation recovery is proved to be n^c for some c > 0. Moreover, in another paper, Pananjady et al. (2017), a constant upper bound on the prediction risk is proved. While all these results consider different quantities, it seems that the fundamental statistical limits of this problem are far from being well understood.
To sum up, I think that the paper presents a strong effort in an interesting research direction and the results are sound. However, I believe that the main impact of the paper is on the computational side, while some additional statistical analysis is highly desirable for this problem. Also, I believe that this type of paper, which includes involved theoretical analysis as well as algorithms that are more technical than intuitive, is much more suitable for a full journal publication than for a short conference paper.
nips_2017_1562 | An Empirical Study on The Properties of Random Bases for Kernel Methods
Kernel machines as well as neural networks possess universal function approximation properties. Nevertheless in practice their ways of choosing the appropriate function class differ. Specifically neural networks learn a representation by adapting their basis functions to the data and the task at hand, while kernel methods typically use a basis that is not adapted during training. In this work, we contrast random features of approximated kernel machines with learned features of neural networks. Our analysis reveals how these random and adaptive basis functions affect the quality of learning. Furthermore, we present basis adaptation schemes that allow for a more compact representation, while retaining the generalization properties of kernel machines. | Summary:
The authors provided an empirical study contrasting neural networks and kernel methods, with a focus on how random and adaptive schemes would make efficient use of features in order to improve quality of learning, at four levels of abstraction: data-agnostic random basis (baseline kernel machines with traditional random features), unsupervised data-adaptive basis for better approximation of kernel function, supervised data-label-adaptive basis by kernel target alignment, discriminatively adaptive basis (neural nets). The paper concluded with several suggestions and caveats for efficient use of random features in practice.
Comments:
- 1 -
Line 123: especially for the sake of comparing the UAB case, where the underlying assumption is that using the true kernel function k in prediction yields the "best" performance (so that UAB tries to approximate it), I would suggest testing in the experiments a baseline model that utilizes the true kernel function k in prediction. This would also indicate, for example in Fig. 1, at which point of the KAE curve the accuracy is sufficiently good (despite the many theoretical results available).
- 2 -
The four datasets chosen in the paper certainly provide support for the conclusions that are finally drawn. However, in order to support those conclusions convincingly, more datasets would be needed. For example, the performance scores in Tab. 1 do not seem to differ significantly for each task. Also, the reasons for the inconsistent behaviors across different tasks (CoverType vs. the others in Fig. 1, for instance; Line 222) are not well explained through empirical exploration.
- 3 -
It is not clear how big the difference can be between the four adaptive schemes in terms of training time, which can be crucial in practice. In addition, as the number of (approximate) features D increases, how does the training time increase accordingly in general for each scheme empirically? It would be interesting to also report this. |
nips_2017_1251 | Hiding Images in Plain Sight: Deep Steganography
Steganography is the practice of concealing a secret message within another, ordinary, message. Commonly, steganography is used to unobtrusively hide a small message within the noisy regions of a larger image. In this study, we attempt to place a full size color image within another image of the same size. Deep neural networks are simultaneously trained to create the hiding and revealing processes and are designed to specifically work as a pair. The system is trained on images drawn randomly from the ImageNet database, and works well on natural images from a wide variety of sources. Beyond demonstrating the successful application of deep learning to hiding images, we carefully examine how the result is achieved and explore extensions. Unlike many popular steganographic methods that encode the secret message within the least significant bits of the carrier image, our approach compresses and distributes the secret image's representation across all of the available bits. | The authors present a new steganography technique based on deep neural networks to simultaneously conduct hiding and revealing as a pair. The main idea is to combine two images of the same size together. The trained process aims to compress the information from the secret image into the least noticeable portions of the cover image and consists of three processes: a prep-Network for encoding features, the Hiding Network creates a container image, and a Reveal Network for decoding the transmitted container image. On the positive side, the proposed technique seems novel and clever, although it uses/modifies existing deep learning frameworks and therefore should be viewed as an application paper. The experiments are comprehensive and the results are convincing.
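For concreteness, one training step of the three-network pipeline could look like the following PyTorch-style sketch; the combined objective (cover fidelity plus beta times secret recovery) and the channel-wise concatenation are my assumptions about a reasonable setup, not necessarily the authors' exact architecture or loss.

import torch
import torch.nn.functional as F

def hiding_reveal_step(prep_net, hide_net, reveal_net, cover, secret, beta, optimizer):
    # cover, secret: image batches of identical shape (N, C, H, W)
    features = prep_net(secret)                              # encode the secret image
    container = hide_net(torch.cat([cover, features], dim=1))
    revealed = reveal_net(container)
    loss = F.mse_loss(container, cover) + beta * F.mse_loss(revealed, secret)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return float(loss.item())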
The technique, however, greatly resembles the image decomposition problem, e.g., separating mixtures of intrinsic and reflection layers in the previous literature. I would like the authors to clarify how the problems are different and whether the proposed technique can be used to solve the layer separation problem. More importantly, I wonder if existing approaches to layer separation can be used to directly decode your encrypted results.
I am also a bit concerned about the practicality of the proposed technique. First, the container image will be transmitted and potentially intercepted, and it appears that one can tell directly that the container image contains hidden information (the image contains ringing-type visual artifacts); if that is the case, the approach is likely to undermine its own purpose. Second, if the cover and secret images appear similar, the technique may fail to robustly separate them. So a more interesting question is how to pick a suitable cover image for a specific secret image, but such discussions seem largely missing. Third, the requirement that the cover and the secret images have the same size seems to be a major limitation. One would have to resize the images to make them match, which may increase file sizes, etc.
Overall, I think the proposed approach is interesting but I have concerns on the practicality and would like to see comparisons with state-of-the-art layer separation techniques. |
nips_2017_1037 | Submultiplicative Glivenko-Cantelli and Uniform Convergence of Revenues
In this work we derive a variant of the classic Glivenko-Cantelli Theorem, which asserts uniform convergence of the empirical Cumulative Distribution Function (CDF) to the CDF of the underlying distribution. Our variant allows for tighter convergence bounds for extreme values of the CDF. We apply our bound in the context of revenue learning, which is a well-studied problem in economics and algorithmic game theory. We derive sample-complexity bounds on the uniform convergence rate of the empirical revenues to the true revenues, assuming a bound on the kth moment of the valuations, for any (possibly fractional) k > 1. For uniform convergence in the limit, we give a complete characterization and a zero-one law: if the first moment of the valuations is finite, then uniform convergence almost surely occurs; conversely, if the first moment is infinite, then uniform convergence almost never occurs. | This paper provides a generalization of the Glivenko-Cantelli theorem, a "submultiplicative" compromise between additive and multiplicative errors. The two main results go hand in hand, the first dealing with existence of a sufficient index which guarantees submultiplicativity and the second providing an upper bound on such an index in order to provide ease in applying the results. It is clear that only submultiplicative results are possible due to a simple counterexample given. The proofs of the main results are technical, but mathematically clear.
Throughout the paper, the author(s) familiarity with previous work with generalizing the Glivenko-Cantelli theorem is verified, and the novelty of the work is demonstrated.
The author(s) apply these results to the problem of estimating revenue in an auction via empirical revenues. It is clear that understanding revenue estimation better will lend itself to revenue maximization. Moreover, investigating the connection between finite moments and uniform convergence in this example can offer a framework for exploring other estimators in a similar way.
While the paper is well-written, and I believe the results to be significant and even fundamental, I felt that the example given, while interesting, was quite specialized and detracted somewhat from generality of the paper. Even more detail in the discussion section on the potential impact to the learning theory community would be great.
Additional notes:
The definition of F_n(t) on lines 22 and 23 is hard to read.
The phrasing on lines 66-68 is awkward.
Between lines 104 and 105, why not continue with notation similar to that introduced in Theorem 1.3 for consistency? |
nips_2017_1250 | First-Order Adaptive Sample Size Methods to Reduce Complexity of Empirical Risk Minimization
This paper studies empirical risk minimization (ERM) problems for large-scale datasets and incorporates the idea of adaptive sample size methods to improve the guaranteed convergence bounds for first-order stochastic and deterministic methods. In contrast to traditional methods that attempt to solve the ERM problem corresponding to the full dataset directly, adaptive sample size schemes start with a small number of samples and solve the corresponding ERM problem to its statistical accuracy. The sample size is then grown geometrically -e.g., scaling by a factor of two -and use the solution of the previous ERM as a warm start for the new ERM. Theoretical analyses show that the use of adaptive sample size methods reduces the overall computational cost of achieving the statistical accuracy of the whole dataset for a broad range of deterministic and stochastic first-order methods. The gains are specific to the choice of method. When particularized to, e.g., accelerated gradient descent and stochastic variance reduce gradient, the computational cost advantage is a logarithm of the number of training samples. Numerical experiments on various datasets confirm theoretical claims and showcase the gains of using the proposed adaptive sample size scheme. | The paper proposes an adaptive sample size strategy applied to first order methods to reduce the complexity of solving the ERM problems. By definition, the ERM problem respect to a given dataset D is to minimize the average loss over it. Instead of handling the full dataset all at once, the proposed algorithm starts with a small subset of D and first minimize the ERM problem respect to this subset. After reaching the desired statistical accuracy on this small problem, it doubles the size of the subset and update the problem with the new subset. The strategy repeats such procedure until the full dataset is included. The paper shows an improvement both in theoretical analysis and in experiments.
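As I understand the scheme, the outer loop is roughly the following; the inner solver and the statistical-accuracy target are placeholders in my own notation, not the authors' code.

def adaptive_sample_size_erm(dataset, solve_to_accuracy, statistical_accuracy, w0, m0):
    # solve_to_accuracy(subset, w_init, target): any first-order method run until the ERM
    #   suboptimality on `subset` is below `target` (placeholder for the inner solver)
    # statistical_accuracy(m): target accuracy for a sample of size m, e.g. ~1/m or 1/sqrt(m)
    n, m, w = len(dataset), m0, w0
    while True:
        subset = dataset[:m]
        w = solve_to_accuracy(subset, w, statistical_accuracy(m))  # warm start from previous w
        if m == n:
            return w
        m = min(2 * m, n)   # grow the sample size geometrically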
The paper is very clear, and the theoretical proof is correct and well presented. I find the paper very interesting and I like the idea of taking into account the statistical problem behind the ERM problem. A question I have is that the algorithm requires predefining the statistical accuracy and uses it explicitly as a parameter in the construction of the algorithm; is there any way to adapt it by removing this dependency from the algorithm? This matters because it is demonstrated in the experiments that the choice of such accuracy does influence the performance (in MNIST, the performance of taking 1/n is better than 1/sqrt{n}). The current algorithm will stop once the predefined accuracy is attained, which could potentially be improved by varying it.
Besides, I am a bit concerned about the novelty of the paper. As mentioned by the authors, there is a big overlap with reference [12]. The main strategy, including the regularized subproblem and Proposition 1, is the same as in [12]. The only difference is to replace Newton's method with first-order methods and provide the analysis of the inner loop complexity.
Overall, I find the idea interesting but the contribution seems to be limited; therefore I vote for a weak accept.
nips_2017_615 | Decoupling "when to update" from "how to update"
Deep learning requires data. A useful approach to obtain data is to be creative and mine data from various sources, that were created for different purposes. Unfortunately, this approach often leads to noisy labels. In this paper, we propose a meta algorithm for tackling the noisy labels problem. The key idea is to decouple "when to update" from "how to update". We demonstrate the effectiveness of our algorithm by mining data for gender classification by combining the Labeled Faces in the Wild (LFW) face recognition dataset with a textual genderizing service, which leads to a noisy dataset. While our approach is very simple to implement, it leads to state-of-the-art results. We analyze some convergence properties of the proposed algorithm. | Summary
The paper proposes a meta algorithm for training any binary classifier in a manner that is robust to label noise. A model trained with noisy labels will overfit them if trained for too long. Instead, one can train two models at the same time, initialized at random, and update by disagreement: the updates are performed only when the two models' predictions differ, a sign that they are still learning from the genuine signal in the data (not the noise); and instead, defensively, if the models agree on their predictions and the respective ground truth label is different, they should not perform an update, because this is a sign of potential label noise. A key element is the random initialization of the models, since the assumption is that the two should not give the same prediction unless they are close to convergence; this fits well with deep neural networks, the target of this work.
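The core step, as I read it, is essentially the following sketch (assuming the two models output a single logit for binary classification; this is my rendering, not the authors' code):

import torch

def disagreement_update_step(model1, model2, opt1, opt2, x, y, loss_fn):
    # "when to update": only on examples where the two predictions currently disagree
    with torch.no_grad():
        disagree = (model1(x) > 0).squeeze(-1) != (model2(x) > 0).squeeze(-1)
    if disagree.any():
        x_d, y_d = x[disagree], y[disagree]
        # "how to update": each model takes its usual gradient step on those examples
        for model, opt in ((model1, opt1), (model2, opt2)):
            opt.zero_grad()
            loss_fn(model(x_d).squeeze(-1), y_d).backward()
            opt.step()
    return int(disagree.sum())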
The paper provides a proof of convergence in the case of linear models (updated with the perceptron algorithm and in the realizable case) and a proof that the optimal model cannot be reached in general, unless we resort to restrictive distributional assumptions (this is nice since it also shows a theoretical limitation of the meta-algorithm). The method works well in practice at avoiding overfitting to label noise, as shown by experiments with deep neural networks applied to gender classification on the LFW dataset, and additional tests with linear models and on MNIST.
Comments
The paper is very well written: the method is introduced intuitively, posed in the context of the relevant literature, proven to be sound in a simple theoretical setting, and shown to be effective on a simple experimental setup in a realistic scenario with noise. Additionally, this work stands out from the large set of papers on the topic because of its simplicity and the potential for use in conjunction with other methods.
Proofs are easy to follow and seem flawless. Experimental results are promising on simple scenarios, but future work will be needed to investigate the robustness and effectiveness at scale and in multi-class settings --- although I don't consider this a major issue because the paper is well balanced between theory and practice.
Also, do you find any interesting relation with boosting algorithms? In particular I am referring to "The strength of weak learnability" by R. Schapire, which introduced a first (although impractical) form of boosting of weak classifiers. The algorithm presented back then uses a form of "update by disagreement" for boosting, essentially training a third model only on data points classified the same way by the former two models.
Minors
25 -> better
107 -> probabilistic
183 -> algorithm |
nips_2017_532 | Lower bounds on the robustness to adversarial perturbations
The input-output mappings learned by state-of-the-art neural networks are significantly discontinuous. It is possible to cause a neural network used for image recognition to misclassify its input by applying very specific, hardly perceptible perturbations to the input, called adversarial perturbations. Many hypotheses have been proposed to explain the existence of these peculiar samples as well as several methods to mitigate them, but a proven explanation remains elusive. In this work, we take steps towards a formal characterization of adversarial perturbations by deriving lower bounds on the magnitudes of perturbations necessary to change the classification of neural networks. The proposed bounds can be computed efficiently, requiring time at most linear in the number of parameters and hyperparameters of the model for any given sample. This makes them suitable for use in model selection, when one wishes to find out which of several proposed classifiers is most robust to adversarial perturbations. They may also be used as a basis for developing techniques to increase the robustness of classifiers, since they enjoy the theoretical guarantee that no adversarial perturbation could possibly be any smaller than the quantities provided by the bounds. We experimentally verify the bounds on the MNIST and CIFAR-10 data sets and find no violations. Additionally, the experimental results suggest that very small adversarial perturbations may occur with non-zero probability on natural samples. | This paper introduces lower bounds on the minimum adversarial perturbations that can be efficiently computed through layer-wise composition.
The idea and the approach are timely, and address one of the most pressing problems in Machine Learning. That being said, my main criticism is that the bounds are too loose: the minimum adversarial perturbations found through FGSM are several orders of magnitude larger than the estimated lower bounds. That might have two reasons: either the lower bounds per layer are not tight enough, or the adversarial examples found with FGSM are simply too large and not a good approximation of the real minimum adversarial perturbation. To test the second point, I'd encourage the authors to use better adversarial attacks like LBFGS or DeepFool (e.g. using the recently released Python package Foolbox, which implements many different adversarial attacks). Also, the histograms in Figures 2 & 3 are difficult to compare. A histogram of the per-sample ratio between the adversarial perturbation and the lower bound would be more enlightening (especially once the bounds get tighter).
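To be explicit about the comparison I have in mind, something along these lines (assuming per-sample arrays of the found adversarial perturbation norms and of the computed lower bounds are available) would be more informative than separate histograms:

import numpy as np
import matplotlib.pyplot as plt

def plot_ratio_histogram(adv_norms, lower_bounds, bins=50):
    # per-sample ratio between the adversarial perturbation found by the attack and the bound;
    # ratios near 1 would indicate that the bound is tight for that sample
    ratios = np.asarray(adv_norms) / np.asarray(lower_bounds)
    plt.hist(np.log10(ratios), bins=bins)
    plt.xlabel('log10(found adversarial perturbation / lower bound)')
    plt.ylabel('number of samples')
    plt.show()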
nips_2017_2410 | Scalable Model Selection for Belief Networks
We propose a scalable algorithm for model selection in sigmoid belief networks (SBNs), based on the factorized asymptotic Bayesian (FAB) framework. We derive the corresponding generalized factorized information criterion (gFIC) for the SBN, which is proven to be statistically consistent with the marginal log-likelihood. To capture the dependencies within hidden variables in SBNs, a recognition network is employed to model the variational distribution. The resulting algorithm, which we call FABIA, can simultaneously execute both model selection and inference by maximizing the lower bound of gFIC. On both synthetic and real data, our experiments suggest that FABIA, when compared to state-of-the-art algorithms for learning SBNs, (i) produces a more concise model, thus enabling faster testing; (ii) improves predictive performance; (iii) accelerates convergence; and (iv) prevents overfitting. | The authors propose a variational Bayes method for model selection in sigmoid belief networks. The method can eliminate nodes in the hidden layers of multilayer networks. The derivation of the criterion appears technically solid and a fair amount of experimental support for the good performance of the method is provided.
I have to say I'm no expert in this area, and I hope other reviewers can comment on the level of novelty.
detailed comments:
- p. 1, l. 21: "Model selection is here the task of selecting the number of layers [...]": I got the impression that the proposed algorithm only eliminates individual nodes, not entire layers. Please clarify.
- p. 2, ll. 76--77: Are there really no weights on the first layer? You forgot to define b.
- p. 5, ll. 173--175: If nodes with expected proportion of 1's very small can be eliminated, why doesn't the same hold for nodes with expected proportion of 0's equally small? "When the expectation is not exact, such as in the top layers, [...]": Please clarify. How can we tell, when the expectation is not exact? And do you really mean 'exact' or just 'accurate', etc. What is the precise rule to decide this.
- p. 6, ll. 226--227: Please explain what 10-5^2 and 25-15 mean (is there a typo, or why would you write 5^2 instead of simply 25?).
- p. 6, l. 239: "Our performance metric is the variational lower bound of the test log-likelihood." Why use a variational bound as a performance metric? Sounds like variational techniques are an approach to derive a criterion, but using them also to measure performance sounds questionable. Should the evaluation be based on a score that is independent of the chosen approach and the made assumptions?
- references: please provide proper bibliographic information, not just "JMLR" or "NIPS". Add volume, page numbers, etc. |
nips_2017_2880 | Do Deep Neural Networks Suffer from Crowding?
Crowding is a visual effect suffered by humans, in which an object that can be recognized in isolation can no longer be recognized when other objects, called flankers, are placed close to it. In this work, we study the effect of crowding in artificial Deep Neural Networks (DNNs) for object recognition. We analyze both deep convolutional neural networks (DCNNs) as well as an extension of DCNNs that are multi-scale and that change the receptive field size of the convolution filters with their position in the image. The latter networks, that we call eccentricitydependent, have been proposed for modeling the feedforward path of the primate visual cortex. Our results reveal that the eccentricity-dependent model, trained on target objects in isolation, can recognize such targets in the presence of flankers, if the targets are near the center of the image, whereas DCNNs cannot. Also, for all tested networks, when trained on targets in isolation, we find that recognition accuracy of the networks decreases the closer the flankers are to the target and the more flankers there are. We find that visual similarity between the target and flankers also plays a role and that pooling in early layers of the network leads to more crowding. Additionally, we show that incorporating flankers into the images of the training set for learning the DNNs does not lead to robustness against configurations not seen at training. | This paper studies if crowding, a visual effect suffered by human visual systems, happens to deep neural network as well. The paper systematically analyzes the performance difference when (1) clutter/flankers is present; (2) the similarity and proximity to the target; (3) when different architectures of the network is used.
Pros:
There are very few papers that study whether various visual perceptual phenomena exist in deep neural nets, or in vision algorithms in general. This paper studies the effect of crowding in the DNN/DCNN image classification problem, and presents some interesting results which seem to suggest that a similar effect exists in DNNs because pooling layers merge nearby responses. This is related to the theories of crowding in humans, which is also interesting. The paper also suggests that we should not pool prematurely when designing architectures. In my opinion such papers should be encouraged.
Cons:
My main criticism of the paper is that it solely studies crowding in the context of image classification. However, if crowding were studied in the context of object detection, where the task is to localize the object and recognize its category, the effect might be significantly lessened. For example, R-CNN proposes high-saliency regions where the object might be and performs classification on that masked region. Because targets are usually centered in such proposed regions and background clutter is excluded from the proposed region, the accuracy can potentially be much higher.
After all, the extent to which crowding is present in DNNs depends a lot on the chosen architecture. The architecture in this paper is very primitive compared to what researchers consider state-of-the-art these days, and the accuracies on the MNIST tasks reported by the paper are way lower than what most researchers would expect from a practical system. For example, [1] performs digit OCR on images with much more clutter but with very high accuracy. It is not obvious that architectures like that also suffer from crowding.
Suggestion:
The paper is overall easy to follow. I feel the experimental setup could be made clearer if some more example images were shown (non-cropped, like the ones in Fig. 1 of the supplementary material).
Overall, this paper has an interesting topic and is a nice read. The conclusions are not too strong because it uses simplistic architectures/datasets. But I think this is nonetheless a helpful paper for (re-)generating interest in drawing relations between theories of human visual recognition and neural nets.
[1] Goodfellow, Ian J., et al. "Multi-digit number recognition from street view imagery using deep convolutional neural networks." arXiv preprint arXiv:1312.6082 (2013). |
nips_2017_1177 | Lookahead Bayesian Optimization with Inequality Constraints
We consider the task of optimizing an objective function subject to inequality constraints when both the objective and the constraints are expensive to evaluate. Bayesian optimization (BO) is a popular way to tackle optimization problems with expensive objective function evaluations, but has mostly been applied to unconstrained problems. Several BO approaches have been proposed to address expensive constraints but are limited to greedy strategies maximizing immediate reward. To address this limitation, we propose a lookahead approach that selects the next evaluation in order to maximize the long-term feasible reduction of the objective function. We present numerical experiments demonstrating the performance improvements of such a lookahead approach compared to several greedy BO algorithms, including constrained expected improvement (EIC) and predictive entropy search with constraint (PESC). | This paper seems a continuation of last year: Bayesian optimization with a finite budget... where the authors have added new elements to deal with inequality constraints. The method uses a approximation of a lookahead strategy by dynamic programming. For the constrained case, the authors propose an heuristic that combines the EIc criterion for all the steps except for the last one were the mean function is used. The authors claim that the mean function has an exploitative behaviour, although it has been previously shown that it might be misleading [A].
A considerably amount of the text, including Figure 1, can be mostly found in [16]. Although it is nice to have an self-contained paper as much as possible, that space could be used to explain better the selection of the acquisition heuristic and present alternatives. For example, the experiments should show the result a single-step posterior mean acquisition function, which will correspond to h=0 as a baseline for each of the functions/experiments.
Concerning the experiments, there are some details that should be improved or addressed in the paper:
- Why P2 does not include h=3?
- How is it possible that EIc for P2 goes upwards around n=25?
- The plots only shows median values without error bars. I can understand that for that, for such number of repetitions, the error bars might be small, but that should be adressed. Furthermore, a common problem of lookahead methods is that, while most of the time outperform greedy methods, they can also fail catastrophically for selecting the wrong path. This would results in some lookahead trials resulting in poor results. Using the average instead of the median would show also the robustness of the method.
- The number of iterations is fairly small for the standard in Bayesian optimization. In fact, it can bee seen that most methods have not reach convergence at that point. This is specially problematic in P1 and P4 where the plots seems to cross exactly at the end of experiments,
- It is unclear why the performance decreases after h=2. If possible, the authors should provide an intuition behind that results. Furthermore, the fact that for most of the experiments h=1 is the optimal strategy seems to indicate that the whole dynamic programing and heuristics might be excessive for CBO, or that more complex and higher-dimensional problems are need to illustrate the benefits of the strategy.
[A] Jones, Donald R. "A taxonomy of global optimization methods based on response surfaces." Journal of global optimization 21.4 (2001): 345-383. |
nips_2017_1645 | Decomposable Submodular Function Minimization Discrete and Continuous
This paper investigates connections between discrete and continuous approaches for decomposable submodular function minimization. We provide improved running time estimates for the state-of-the-art continuous algorithms for the problem using combinatorial arguments. We also provide a systematic experimental comparison of the two types of methods, based on a clear distinction between level-0 and level-1 algorithms. | This paper studies the problem of minimizing a decomposable submodular function. Submodular minimization is a well studied and important problem in machine learning for which there exist algorithms to solve the problem exactly. However, the running time of these algorithms is a high polynomial and they are thus oftentimes not practical. To get around this issue, submodular functions that can be decomposed and written as a sum of submodular functions over a much smaller support (DSFM) are often considered as they often appear in practice.
This paper improves the analysis of the fastest algorithms for DSFM by a factor equal to the number of functions in the decomposition. It also provides an experimental framework based on a distinction between “level 0” algorithms, which are subroutines for quadratic minimization, and “level 1” algorithms which minimize the function using level 0 as a black box. These allow a more meaningful comparison where the same level 0 algorithms are used to compare different algorithms. These experiments show a tradeoff between the discrete algorithms that require more calls to the level 0 subroutines and gradient methods with weaker requirements for level 0 but more computation for level 1.
The analysis is complex and relies on both discrete and continuous optimization techniques to meaningfully improve the running time of an important problem where the computational complexity is expensive. The experiments also highlight an interesting tradeoff which suggests that different algorithms should be used in different contexts for the running time of DSFM.
A weakness of the paper is that the writing is very dense and sometimes hard to follow. It would have been nice to have more discussion on the parameters kappa* and l* and the precise bounds in terms of these parameters. It would have also been nice to have some comparison on the theoretical bounds between RCDM, ACDM, and IBFS. |
nips_2017_2145 | Streaming Weak Submodularity: Interpreting Neural Networks on the Fly
In many machine learning applications, it is important to explain the predictions of a black-box classifier. For example, why does a deep neural network assign an image to a particular class? We cast interpretability of black-box classifiers as a combinatorial maximization problem and propose an efficient streaming algorithm to solve it subject to cardinality constraints. By extending ideas from Badanidiyuru et al. [2014], we provide a constant factor approximation guarantee for our algorithm in the case of random stream order and a weakly submodular objective function. This is the first such theoretical guarantee for this general class of functions, and we also show that no such algorithm exists for a worst case stream order. Our algorithm obtains similar explanations of Inception V3 predictions 10 times faster than the state-of-the-art LIME framework of Ribeiro et al. [2016]. | This paper proposes a new approach STREAK for maximizing weakly submodular functions. The idea is to collect several outputs of the Threshold Greedy algorithm, where the selection is based on a given threshold. The theoretical results of the Threshold Greedy algorithm and STREAK are verified sequentially. STREAK is also used to provide interpretable explanations for neural-networks and the empirical studies are given.
This is an interesting work. The streaming algorithm is novel and the analyses are elaborate. The problem constructed to prove the ineffectiveness of randomized streaming algorithms is ingenious. The experiments also show the superiority of STREAK. However, how should one choose the proper \epsilon in STREAK? Figure 2(a) shows that by varying \epsilon, the algorithm can achieve a gradual tradeoff between speed and performance, but the result is trivial because with the increase of \epsilon, the time and space complexity will both increase and lead to better performance.
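To make the selection rule concrete, here is a minimal sketch of a single Threshold-Greedy pass as I understand it from the paper's description; STREAK maintains several such instances over a grid of thresholds. The function f, the threshold tau and the budget k are placeholders, not the authors' code.

```python
# Minimal sketch of one Threshold-Greedy pass over a stream: an element is
# added when its marginal gain meets a fixed threshold tau (cardinality k).
def threshold_greedy(stream, f, tau, k):
    S = []
    for e in stream:
        if len(S) >= k:
            break
        gain = f(S + [e]) - f(S)   # marginal gain of e w.r.t. the current S
        if gain >= tau:
            S.append(e)
    return S

# Toy usage with a coverage-style objective over sets of integers.
coverage = lambda S: len(set().union(*S)) if S else 0
stream = [{1, 2}, {2, 3}, {3}, {4, 5, 6}]
print(threshold_greedy(stream, coverage, tau=2, k=2))  # [{1, 2}, {4, 5, 6}]
```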
I have checked all the proofs, and believe that most of them are correct. However, there are also some places which are not clear.
1. In the proof of Lemma A.1, the choice of an appropriate arrival order is unclear, and how is the inequality at the end of the proof derived?
2. In the proof of Lemma A.9, what is the meaning of the first paragraph? It seems to have nothing to do with the proof in the second paragraph.
3. In the proof of Theorem 5.6, why does m=0 imply I=\emptyset?
nips_2017_2519 | Selective Classification for Deep Neural Networks
Selective classification techniques (also known as reject option) have not yet been considered in the context of deep neural networks (DNNs). These techniques can potentially significantly improve DNNs prediction performance by trading-off coverage. In this paper we propose a method to construct a selective classifier given a trained neural network. Our method allows a user to set a desired risk level. At test time, the classifier rejects instances as needed, to grant the desired risk (with high probability). Empirical results over CIFAR and ImageNet convincingly demonstrate the viability of our method, which opens up possibilities to operate DNNs in mission-critical applications. For example, using our method an unprecedented 2% error in top-5 ImageNet classification can be guaranteed with probability 99.9%, and almost 60% test coverage. | The paper addresses the problem of constructing a classifier with the reject
option that has a desired classification risk and, at the same time, minimizes the
probability the "reject option". The authors consider the case when the
classifiers and an associate confidence function are both known and the task is
to determine a threshold on the confidence that determines whether the
classifier prediction is used or rejected. The authors propose an algorithm
finding the threshold and they provide a statistical guarantees for the method.
Comments:
- The authors should provide an exact definition of the task that they attempt to solve with their algorithm. The definition on lines 86-88 describes rather the ultimate goal, while the algorithm proposed in the paper solves a simpler problem: given $(f,\kappa)$, find a threshold $\theta$ defining $g$ in equation (3) such that (2) holds and the coverage is maximal.
- It seems that for certain values of the input arguments $(\delta, r^*, S_m, \ldots)$ Algorithm 1 will always return a trivial solution. By a trivial solution I mean that the condition on line 10 of Algorithm 1 is never satisfied and thus all examples end up in the "reject region". It seems to me that for $\hat{r}=0$ (zero training error) the bound $B^*$ solving equation (4) can be determined analytically as $B^* = 1-(\delta/\log_2(m))^{1/m}$. Hence, if we set the desired risk $r^*$ below $B^* = 1-(\delta/\log_2(m))^{1/m}$, then Algorithm 1 will always return a trivial solution. For example, if we set the confidence $\delta=0.001$ (as in the experiments) and the number of training examples is $m=500$, then the minimal bound is $B^*=0.0180$ (1.8%); a quick numerical check of this value is given after these comments. In turn, setting the desired risk $r^* < 0.018$ will always produce a trivial solution, whatever data are used. I think this issue needs to be clarified by the authors.
- The experiments should contain a comparison to a simple baseline that anyone would try in the first place: namely, finding the threshold directly using the empirical risk $\hat{r}_i$ instead of the sophisticated bound $B^*$ (a small sketch of this baseline is given at the end of this review). One would assume that the danger of over-fitting is low (especially for the 5000 examples used in the experiments), taking into account the simple hypothesis space (i.e. "threshold rules"). Without the comparison to this baseline it is hard to judge the practical benefits of the proposed method.
- I'm missing a discussion of the difficulties connected to solving the numerical problem (4), e.g. which numerical method is suitable and whether there are numerical issues when evaluating the combinatorial coefficient for large m and j.
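The closed-form value quoted in the second comment above is easy to verify numerically; a minimal check with the stated numbers:

```python
import math

# Check of the trivial-solution bound for zero training error discussed above:
# B* = 1 - (delta / log2(m))**(1/m), with delta = 0.001 and m = 500.
delta, m = 0.001, 500
B_star = 1.0 - (delta / math.log2(m)) ** (1.0 / m)
print(round(B_star, 4))  # ~0.018, i.e. roughly 1.8% as claimed above
```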
Typos:
- line 80: (f,g)
- line 116: B^*(\hat{r},\delta,S_m)
- line 221: "mageNet" |
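Finally, a minimal sketch of the simple empirical-risk baseline suggested in the comments above (all names are placeholders; ties and non-monotonicity of the empirical risk are ignored for simplicity):

```python
import numpy as np

# Baseline sketch: choose the confidence threshold theta directly from the
# empirical risk, i.e. accept the largest-coverage prefix (sorted by
# confidence) whose empirical risk does not exceed the target r_star.
def baseline_threshold(kappa, errors, r_star):
    order = np.argsort(-kappa)                          # most confident first
    cum_risk = np.cumsum(errors[order]) / np.arange(1, len(errors) + 1)
    ok = np.where(cum_risk <= r_star)[0]
    if len(ok) == 0:
        return np.inf                                   # reject everything
    return kappa[order][ok[-1]]                         # lowest accepted confidence

rng = np.random.RandomState(0)
kappa = rng.rand(1000)
errors = (rng.rand(1000) < 0.2 * (1 - kappa)).astype(float)  # fewer errors at high confidence
print(baseline_threshold(kappa, errors, r_star=0.05))
```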
nips_2017_2518 | Accelerated First-order Methods for Geodesically Convex Optimization on Riemannian Manifolds
In this paper, we propose an accelerated first-order method for geodesically convex optimization, which is the generalization of the standard Nesterov's accelerated method from Euclidean space to nonlinear Riemannian space. We first derive two equations and obtain two nonlinear operators for geodesically convex optimization instead of the linear extrapolation step in Euclidean space. In particular, we analyze the global convergence properties of our accelerated method for geodesically strongly-convex problems, which show that our method improves the convergence rate from
Moreover, our method also improves the global convergence rate on geodesically general convex problems from O(1/k) to O(1/k^2). Finally, we give a specific iterative scheme for matrix Karcher mean problems, and validate our theoretical results with experiments. | Summary of the Paper
====================
The paper considers a geodesic generalization of Nesterov's accelerated gradient descent (AGD) algorithm for Riemannian spaces. Two versions are presented: geodesic convex case and geodesic strongly convex smooth case. The proposed algorithms are instantiated for Karcher mean problems, and are shown to outperform two previous algorithms (RGD, RSGD), which address the same setting, with randomized data.
Evaluation
==========
From a theoretical point of view, finding a proper generalization for the momentum term (so as to be able to implement AGD) which maintains the same convergence rate for any Riemannian space is novel and very interesting. From a practical point of view, it is not altogether clear when the overall running time is reduced. Indeed, although this algorithm requires a significantly smaller number of iterations, implementing the momentum term can be very costly (as opposed to the Euclidean case). That said, the wide range of settings to which this algorithm potentially applies makes it appealing as a general mechanism, and may encourage further development in this direction. The paper is well-written and relatively easy to follow.
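To make the discussion of cost more concrete, here is a sketch of plain (non-accelerated) Riemannian gradient descent on the unit sphere, where the exponential map has a closed form; it only illustrates the Exp-based update that the paper builds on, since the accelerated scheme additionally requires the nonlinear momentum operator discussed below.

```python
import numpy as np

# Sketch of Riemannian gradient descent on the sphere for f(x) = x^T A x,
# using the sphere's closed-form exponential map (the paper's accelerated
# method replaces the plain step with an Exp-based momentum update).
def sphere_exp(x, v):
    nv = np.linalg.norm(v)
    if nv < 1e-12:
        return x
    return np.cos(nv) * x + np.sin(nv) * v / nv

def riemannian_gd(A, x0, step=0.05, iters=500):
    x = x0 / np.linalg.norm(x0)
    for _ in range(iters):
        egrad = 2 * A @ x                      # Euclidean gradient
        rgrad = egrad - (x @ egrad) * x        # project onto the tangent space
        x = sphere_exp(x, -step * rgrad)       # geodesic step
    return x

rng = np.random.RandomState(0)
M = rng.randn(5, 5)
A = M + M.T
x = riemannian_gd(A, rng.randn(5))
print(x @ A @ x)  # should be close to the smallest eigenvalue of A
```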
General Comments
================
- The differential-geometric notions and other definitions used intensively throughout this paper may not be familiar to the typical NIPS reader. I would suggest making the definition section tighter and cleaner. In particular, the following do not seem to be defined in the text: star-concave, star-convex, grad f(x), intrinsic inner-product, diameter of X, conic geometric optimization, retractions.
- Equations 4 and 5 are given without any intuition as to how one should derive them. They seem to appear somewhat out of the blue, and I feel that the nice figure provided by the authors, which could potentially explain them, is not addressed appropriately in the text.
Minor Comments
==============
L52 - redundant 'The'
L52+L101 - There is a great line of work which tries to give a more satisfying interpretation for AGD. Stating that the proximal interpretation as the main interpretation seems to me somewhat misleading.
L59 - Can you elaborate more on the computational complexity required for implementing the exponent function and the nonlinear momentum operator S.
L70 - why linearization of gradient-like updates are contributions by themselves.
L72 - classes?
L90 + L93 - The sentence 'we denote' is not clear.
L113 - the sentence 'In addition, different..' is a bit unclear..
L126 - besides the constraint on alpha, what other considerations are needed to be taken in order to set its value optimally?
L139 + L144 - Exp seems to be written in the wrong font.
L151 - The wording used in Lemma 3 is a bit unclear.
L155 - in Thm 1, consider restating the value of beta.
L165 - How does the upper bound depend on D? Also, here again, how is alpha should be set?
L177 - maybe geometrically -> geodesically?
L180 - In section 5, not sure I understand how the definition of S is instantiated for this case.
L181 - 'For the accelerated scheme in (4)' do you mean algorithm 1?
L183 - Y_k or y_k?
L193 - 'with the geometry' is not clear.
L201 - Redundant 'The'. Also, (2) should be (3)?
L219 - Can you explain more rigorously (or provide relevant pointers) why RGD is a good proxy for all the algorithms stated above?
Page 8 - Consider providing the actual time as well, as this benchmark does not take into account the per-iteration cost.
L225 - Why is C defined explicitly? |
nips_2017_292 | Phase Transitions in the Pooled Data Problem
In this paper, we study the pooled data problem of identifying the labels associated with a large collection of items, based on a sequence of pooled tests revealing the counts of each label within the pool. In the noiseless setting, we identify an exact asymptotic threshold on the required number of tests with optimal decoding, and prove a phase transition between complete success and complete failure. In addition, we present a novel noisy variation of the problem, and provide an information-theoretic framework for characterizing the required number of tests for general random noise models. Our results reveal that noise can make the problem considerably more difficult, with strict increases in the scaling laws even at low noise levels. Finally, we demonstrate similar behavior in an approximate recovery setting, where a given number of errors is allowed in the decoded labels. | In the pooled data problem there are p items, each of which is assigned 1 of d possible labels. The goal is to recover the vector of labels from n "pooled tests." In each of these tests, a "pool" (subset of the items) is chosen and we observe how many items in the pool have each label (but not which items have which). A random prior on the label vector is used: the proportion with which each label occurs in the population is fixed, and the true labelling is chosen independently among all labellings that exactly obey these proportions. This paper considers the non-adaptive case where all the pools (subsets) must be chosen in advance. The focus is mainly on the case of Bernoulli random pools: the incidence matrix of pools versus items has iid Bernoulli(q) entries for some fixed 0 < q < 1. The objective is exact recovery of the label vector, although extensions to approximate recovery are considered in the appendix.
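For concreteness, a small sketch of the observation model just described (Bernoulli(q) pools, per-pool label counts); for simplicity the labels below are drawn i.i.d. rather than with exactly fixed proportions, and all names are placeholders.

```python
import numpy as np

# Sketch of the noiseless pooled-data observations: p items with labels in
# {0,...,d-1}, n Bernoulli(q) pools, and each test reveals only the count of
# every label inside its pool (not which item carries which label).
rng = np.random.RandomState(0)
p, d, n, q = 12, 3, 5, 0.5
labels = rng.randint(d, size=p)             # the label vector to be recovered
onehot = np.eye(d, dtype=int)[labels]       # p x d indicator matrix
pools = rng.binomial(1, q, size=(n, p))     # n x p pool membership matrix
observations = pools @ onehot               # n x d matrix of label counts per pool
print(observations)
```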
This paper proves information-theoretic lower bounds for the pooled data problem, both in the noiseless setting described above and in a very general noisy model.
In the noiseless setting with Bernoulli random pools, they prove the following sharp phase transition. The regime is where the number of samples is n = c*p/log(p) for a constant c, with p tending to infinity. They find the exact constant C such that if c > C then exact recovery is possible with success probability tending to 1, and if c < C then any procedure will fail with probability tending to 1. The constant C depends on the number of labels and the proportions of each label (but not on the constant q which determines the pool sizes). The upper bound was known previously; this paper proves the tight lower bound (i.e. the impossibility result), improving upon a loose bound in prior work.
The sharp information-theoretic threshold above is not known to be achieved by any efficient (i.e. polynomial time) algorithm. Prior work (citation [4] in the paper) suggests that efficient algorithms require a different threshold (in which n scales proportionally to p).
The key technical insight used in this paper to prove the sharp lower bound is the following "genie argument." Let G be some subset of {1,..,d} (the label values), and imagine revealing the labels of all the items that have labels not in G. Now apply a more standard lower bound to this setting, and optimize G in order to get the strongest possible bound.
The other contribution of this paper is to the noisy setting, which has not previously been studied. This paper defines a very general noise model where each observation is corrupted via an arbitrary noisy channel. They give a lower bound for exact recovery in this setting, and specialize it to certain concrete cases (e.g. Gaussian noise). The proof of the lower bound is based on a variant of the above genie argument, combined with standard lower bounds based on Fano's inequality.
Unfortunately there are no existing upper bounds for the noisy setting, so it is hard to judge the strength of this lower bound. The authors do, however, argue for its efficacy in a few different ways. First, it recovers the sharp results for the noiseless case and also recovers known bounds for the "group testing" problem, which is essentially the special case of 2 labels: "defective" and "non-defective" (and you want to identify the defective items). Furthermore, the lower bound for Gaussian noise is strong enough to show an asymptotic separation between the noiseless and noisy cases: while in the noiseless case it is necessary and sufficient for n to be of order p/log(p), the noisy case requires n to be at least p*log(p).
In the conclusion, the authors pose the open question of finding upper bounds for the noisy setting and provide some justification as to why this seem difficult using existing techniques.
I think this is a good paper and I recommend its acceptance. The exposition is good. The sharp lower bound for the noiseless case is a very nice result, and the proof technique seems to be novel. The lower bound for the noisy case is also a nice contribution; although it is hard to judge whether or not it should be tight (since no upper bound is given), it is certainly nontrivial, using the genie argument to surpass standard methods.
Specific comment:
- I think there is a typo in Table 2: the heading "exact recovery" should be "noiseless." (If I understand correctly, the entire table pertains to exact recovery.) |
nips_2017_2813 | A Greedy Approach for Budgeted Maximum Inner Product Search
Maximum Inner Product Search (MIPS) is an important task in many machine learning applications such as the prediction phase of low-rank matrix factorization models and deep learning models. Recently, there has been substantial research on how to perform MIPS in sub-linear time, but most of the existing work does not have the flexibility to control the trade-off between search efficiency and search quality. In this paper, we study the important problem of MIPS with a computational budget. By carefully studying the problem structure of MIPS, we develop a novel Greedy-MIPS algorithm, which can handle budgeted MIPS by design. While simple and intuitive, Greedy-MIPS yields surprisingly superior performance compared to state-of-the-art approaches. As a specific example, on a candidate set containing half a million vectors of dimension 200, Greedy-MIPS runs 200x faster than the naive approach while yielding search results with the top-5 precision greater than 75%. | The aim of the paper is to propose a new greedy approach for Maximum Inner Product Search problem: given a candidate vector, retrieve a set of vectors with maximum inner product to the query vector. This is a crucial step in several machine learning and data mining algorithms, and the state of the art methods work in sub-linear time recently. The originality of the paper is to study the MIPS problem under a computational budget. The proposed approach achieves
better balance between search efficiency and quality of the retrieved vectors, and does not require a nearest neighbor search phase, as commonly done by state of the art approaches. The authors claim impressive runtime results (their algorithm is 200x faster than the naive approach), and a top-5 precision greater than 75%.
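For reference, the naive exact baseline that the reported 200x speedup is measured against is just a full matrix-vector product followed by a top-k selection; a sketch is below (a budgeted method such as Greedy-MIPS instead visits only a bounded number of candidate entries). The sizes are placeholders.

```python
import numpy as np

# Naive exact MIPS baseline: score every candidate and take the top-k.
# A budgeted method avoids the full O(n*d) scan over all candidates.
def naive_mips(H, w, k=5):
    scores = H @ w                              # O(n * d)
    topk = np.argpartition(-scores, k)[:k]      # unordered top-k
    return topk[np.argsort(-scores[topk])]      # ordered by inner product

rng = np.random.RandomState(0)
H = rng.randn(50000, 200)                       # (the paper uses half a million vectors)
w = rng.randn(200)
print(naive_mips(H, w, k=5))
```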
The paper is very dense (the space between two lines seems smaller than the one in the template). However, the paper is well-written and the procedure is well-explained. The proposed method seems also quite original, and comes with theoretical guarantees. The technical results seem sound.
Some remarks:
Figure 1 should be placed at the top of P.5, it is a bit difficult to follow without the later explanations.
The bound used in P.4 needs to be more studied, in order to find, for instance, some properties (or better, an approximation). This bound is a key point of this procedure, and it is used at the beginning.
P.5 "visit (j,t) entries of Z": (j,t) is a cell in the matrix Z, however you consider this notation as a number. Maybe "j \times t" entries?
The reviewer would be interested in having access to the source code of the algorithm and the data, so that he can reproduce the experiments.
nips_2017_2482 | Convergence rates of a partition based Bayesian multivariate density estimation method
We study a class of non-parametric density estimators under Bayesian settings. The estimators are obtained by adaptively partitioning the sample space. Under a suitable prior, we analyze the concentration rate of the posterior distribution, and demonstrate that the rate does not directly depend on the dimension of the problem in several special cases. Another advantage of this class of Bayesian density estimators is that it can adapt to the unknown smoothness of the true density function, thus achieving the optimal convergence rate without artificial conditions on the density. We also validate the theoretical results on a variety of simulated data sets. | Note: Below, I use [#M] for references in the main paper and [#S] for references in the supplement, since these are indexed differently.
Summary: This paper proposes and analyzes a Bayesian approach to nonparametric density estimation. The proposed method is based on approximation by piecewise-constant functions over a binary partitioning of the unit cube, using a prior that decays with the size of the partition. The posterior distribution of the density is shown to concentrate around the true density f_0, at a rate depending on the smoothness r of f_0, a measure in terms of how well f_0 can be approximated by piecewise-constant functions over binary partitionings. Interestingly, the method automatically adapts to unknown r, and r can be related to more standard measures of smoothness, such as Holder continuity, bounded variation, and decay rate of Haar basis coefficients. As corollaries, posterior concentration rates are shown for each of these cases; in the Holder continuous case, this rate is minimax optimal.
Major Comments:
The theoretical results of this paper appear quite strong to me. In particular, the results on adaptivity (to unknown effective dimension and smoothness) are quite striking, especially given that this seems an intrinsic part of the estimator design, rather than an additional step (as in, e.g., Lepski's method). Unfortunately, I'm not too familiar with Bayesian or partition-based approaches to nonparametric density estimation, and my main issue is that the relation of this work to previous work isn't very clear. Looking through the supplement, it appears that most of proofs are based on the results of [9M] and [14M], so there is clearly some established closely related literature. I'd likely raise my score if this relation could be clarified, especially regarding the following two questions:
1) Is the prior introduced in Section 2.2 novel? If so, how does it differ from similar prior work (if such exists), and, if not, which aspects, precisely, of the paper are novel?
2) Much of the proof of Theorem 3.1 appears to depend on results from prior work (e.g., Lemmas A.2, A.4, B.1, and B.2, and Theorem A.3 are from [4S] and [6S]). Corrolaries 3.2, 3.3, and 3.4 all follow by combining Theorem 3.1 with results from [4S] and [1S] that relate the respective smoothness condition to smoothness in terms of approximability by piecewise-constant functions on binary partitions. Thus, much of this work appears to be a re-working of previous work, while Lemma B.4 and the remainder of Theorem 3.1 appear to be the main contributions. My question is: at a high level, what were the main limitations of the previous results/proof techniques that had to be overcome to prove Theorem 3.1, and, if it's possible to summarize, what were the main innovations required to overcome these?
Minor Comments:
Lines 36-44: It would help to mention here that rates are measured in Hellinger divergence.
Line 40-41: (the minimax rate for one-dimensional Hölder continuous function is (n/log n)^{−\beta/(2\beta+1)}): If I understand correctly, the log factors stem from the fact that \beta is unknown (I usually think of the minimax rate as n^{−\beta/(2\beta+1)}, when \beta is treated as known). If this is correct, it would help to mention this here.
Line 41: small typo: "continuous function" should be "continuous functions"
Lines 60-72: There's an issue with the current notation: As written, \Omega_2,...,\Omega_I aren't a well-defined partition under this recursive procedure. If we split \Omega_j at step i, then \Omega_j should be removed from the list of partitions and replaced by two smaller partitions. I think I understand what is meant (Figure 1 is quite clear), but I don't see a great way to explain this with concise mathematical notation - the options I see are (a) describing the process as a tree with a set at each node, and then taking all the leaves of the tree, or (b) using a pseudocode notation where the definition of \Omega_j can change over the course of the recursive procedure (a small pseudocode sketch of option (b) is given after these comments).
Line 185: I believe the minimax rate for the bounded variation class is of order n^(-1/3) (see, e.g., Birgé, Lucien. "Estimating a density under order restrictions: Nonasymptotic minimax risk." The Annals of Statistics (1987): 995-1012.) Perhaps this is worth mentioning?
Lines 157-175: Section 3.1 considers the case of a weak-\ell^p constraint on the Haar basis coefficients of the density. The paper calls this a spatial sparsity constraint. I feel this is misleading, since the sparsity assumption is over the Haar basis coefficients, rather than over spatial coordinates (as in [1M]). As a simple example, the uniform distribution is extremely sparse in the Haar basis, but is in no way spatially concentrated. I believe this is actually a smoothness assumption, since Haar basis coefficients can be thought of as identifying the discontinuities of a piecewise constant approximation function. Indeed, the weak-\ell^p constraint on the Haar coefficients is roughly equivalent to a bound on the Besov norm of the density (see Theorem 5.1 of Donoho, David L. "De-noising by soft-thresholding." IEEE transactions on information theory 41.3 (1995): 613-627.)
Line 246: small typo: "based adaptive partitioning" should be "based on adaptive partitioning".
Line 253: Citation [1M] is missing the paper title. I believe the intended citation is "Abramovich, F., Benjamini, Y., Donoho, D. L., & Johnstone, I. M. (2006). Special invited lecture: adapting to unknown sparsity by controlling the false discovery rate. The Annals of Statistics, 584-653." |
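Regarding the comment on lines 60-72 above, a minimal pseudocode sketch of option (b), where the current partition is kept as a mutable list of regions and a split replaces one region by its two halves; regions are axis-aligned boxes here and all names are placeholders.

```python
# Pseudocode sketch of the recursive binary partitioning: keep the current
# partition as a list of boxes; a split removes one box and inserts its halves.
def split_box(box, axis):
    lo, hi = box[axis]
    mid = (lo + hi) / 2.0
    left, right = list(box), list(box)
    left[axis], right[axis] = (lo, mid), (mid, hi)
    return left, right

partition = [[(0.0, 1.0), (0.0, 1.0)]]        # start from the unit square
for j, axis in [(0, 0), (1, 1), (0, 1)]:      # a fixed sequence of (region, axis) splits
    box = partition.pop(j)
    partition[j:j] = split_box(box, axis)     # replace the box by its two children
print(partition)                              # the current (well-defined) partition
```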
nips_2017_931 | Translation Synchronization via Truncated Least Squares
In this paper, we introduce a robust algorithm, TranSync, for the 1D translation synchronization problem, in which the aim is to recover the global coordinates of a set of nodes from noisy measurements of relative coordinates along an observation graph. The basic idea of TranSync is to apply truncated least squares, where the solution at each step is used to gradually prune out noisy measurements. We analyze TranSync under both deterministic and randomized noisy models, demonstrating its robustness and stability. Experimental results on synthetic and real datasets show that TranSync is superior to state-of-the-art convex formulations in terms of both efficiency and accuracy. | In this paper the authors describe a robust and scalable algorithm, TranSync, for solving the 1D translation synchronization problem. The algorithm is quite simple (solve a truncated least squares problem at each iteration), and its computational efficiency is superior to state-of-the-art methods for solving the linear programming formulation. On the other hand, they analyze TranSync under both deterministic and randomized noise models, demonstrating its robustness and stability. In particular, when the pair-wise measurement is biased, TranSync can still achieve sub-constant recovery rate, while the linear programming approach can tolerate no more
than 50% of the measurements being biased. The paper is very readable and
the proofs of main theorems are clear and appear correct. However, it will be good if the authors can provide more intuition behind the theorem. The numerical experiments are complete and clear. |
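To make the truncation idea concrete, here is a rough sketch of one possible iterate-and-prune implementation of the idea described in the abstract; the threshold schedule, the anchoring of the global shift, and all names are my own placeholder choices, not the authors' exact scheme.

```python
import numpy as np

# Rough sketch of the truncated-least-squares loop: solve a least-squares
# problem on the currently kept edges, prune edges with large residuals,
# shrink the truncation threshold, and repeat.
def transync_sketch(edges, t, n, rounds=4, thresh=2.0, shrink=0.5):
    """edges: array of (i, j) pairs, t: noisy measurements of x_i - x_j, n: #nodes."""
    keep = np.ones(len(edges), dtype=bool)
    x = np.zeros(n)
    for _ in range(rounds):
        idx = np.where(keep)[0]
        A = np.zeros((len(idx) + 1, n))
        A[np.arange(len(idx)), edges[idx, 0]] = 1.0
        A[np.arange(len(idx)), edges[idx, 1]] = -1.0
        A[-1, 0] = 1.0                              # anchor the global shift: x_0 ~ 0
        b = np.concatenate([t[idx], [0.0]])
        x, *_ = np.linalg.lstsq(A, b, rcond=None)
        residuals = np.abs(x[edges[:, 0]] - x[edges[:, 1]] - t)
        keep = residuals <= thresh                  # prune badly fit measurements
        thresh *= shrink                            # tighten the truncation level
    return x

# Toy usage: complete graph on 4 nodes with one grossly corrupted measurement.
edges = np.array([(i, j) for i in range(4) for j in range(i + 1, 4)])
x_true = np.array([0.0, 1.0, 2.0, 3.0])
t = x_true[edges[:, 0]] - x_true[edges[:, 1]]
t[2] += 5.0                                         # corrupt the (0, 3) measurement
print(np.round(transync_sketch(edges, t, 4), 3))    # close to [0, 1, 2, 3]
```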
nips_2017_3285 | Solid Harmonic Wavelet Scattering: Predicting Quantum Molecular Energy from Invariant Descriptors of 3D Electronic Densities
We introduce a solid harmonic wavelet scattering representation, invariant to rigid motion and stable to deformations, for regression and classification of 2D and 3D signals. Solid harmonic wavelets are computed by multiplying solid harmonic functions with Gaussian windows dilated at different scales. Invariant scattering coefficients are obtained by cascading such wavelet transforms with the complex modulus nonlinearity. We study an application of solid harmonic scattering invariants to the estimation of quantum molecular energies, which are also invariant to rigid motion and stable with respect to deformations. A multilinear regression over scattering invariants provides close to state of the art results over small and large databases of organic molecules. | The paper presents the solid harmonic scattering, which creates a rotation invariant representation of 2D and 3D structures. The paper presents the details of the proposed transformation and derives its properties. The solid harmonic scattering is then used to predict the energy of molecules given the positions of individual atoms and their charges. A permutation invariant embedding of a molecule is first computed from positions and charges, and then the scattering transform is applied to obtain a rotation and translation invariance representation. This representation is used to predict the total energy with a linear regressor and a neural net with multiplicative gates. Experiments in the GDB7-12 dataset are performed and the results are competitive with other machine learning based approaches.
The problem of energy prediction is important, and the proposed transformation is interesting. The introduction makes the case of learning from data using the right operators (such as convolutions for images), and motivates the exploration of special operators to analyze other types of data, such as molecule structures. The authors implemented the solution in GPUs to accelerate computations. The formulation of the method seems interesting, but the following questions need to be answered to frame this work with the current research in machine learning:
* Although an elegant design, the proposed module does not have learnable parameters, and thus the contribution to machine learning appears limited. Convolutional filters have the capacity to adapt weights for each problem, while the proposed transform seems to be fixed. Even though it has connections to neural networks, the static nature of the solution goes in the opposite direction of designing machines that learn.
* The transform seems to be very specific for molecule representation. It would be interesting to see applications in other domains that would benefit from this transform. If the value of this work has more impact in the quantum physics community, perhaps NIPS is the wrong venue to discuss its merit? |
nips_2017_3260 | Clustering Stable Instances of Euclidean k-means
The Euclidean k-means problem is arguably the most widely-studied clustering problem in machine learning. While the k-means objective is NP-hard in the worst-case, practitioners have enjoyed remarkable success in applying heuristics like Lloyd's algorithm for this problem. To address this disconnect, we study the following question: what properties of real-world instances will enable us to design efficient algorithms and prove guarantees for finding the optimal clustering? We consider a natural notion called additive perturbation stability that we believe captures many practical instances of Euclidean k-means clustering. Stable instances have unique optimal k-means solutions that does not change even when each point is perturbed a little (in Euclidean distance). This captures the property that kmeans optimal solution should be tolerant to measurement errors and uncertainty in the points. We design efficient algorithms that provably recover the optimal clustering for instances that are additive perturbation stable. When the instance has some additional separation, we can design a simple, efficient algorithm with provable guarantees that is also robust to outliers. We also complement these results by studying the amount of stability in real datasets, and demonstrating that our algorithm performs well on these benchmark datasets. | The authors propose a notion of additive perturbation stability (APS) for Euclidean distances that maintain the optimal k-means clustering solution when each point in the data is moved by a sufficiently small Euclidean distance. I think the paper is rather interesting; however, the results of the paper are not very surprising.
Here are my comments regarding the paper:
(1) To my understanding, the results of Theorem 1.2 are only under the condition of APS. They only hold for the case of k=2 components and may lead to exponential dependence on $k$ components for large $k$. However, under the additional margin condition between any two pairs of cluster, we will able to guarantee the existence of polynomial algorithm on $k$. Can you provide a high level idea of how this additional assumption actually helps? Is it possible to have situation without that margin condition that there exists no algorithm that is polynomial in terms of $k$?
(2) I find the term $\Delta = (1/2-\epsilon)D$, where $D$ is the maximum distance between any pairs of means, is interesting. It also has a nice geometric meaning, which is the distance between any center to the apex of cone. I wonder whether this term $\Delta$ is intrinsic to the setting of APS? May it still be available under other perturbation notions?
(3) The results of Theorem 1.3 and later Theorem 3.2 both require that $\rho$, which is a threshold for the margin between any two pairs of clusters, be at least of the order of $\Delta/\epsilon^{2}$. When $\epsilon$ is sufficiently small, which is also the challenging regime of APS, the term $\Delta/\epsilon^{2}$ becomes very large, i.e., the distances between any two pairs of clusters become very large. The fact that there is an algorithm running in time polynomial in the number of components $k$, sample size $n$, and dimension $d$ under that setting is not surprising. Is it possible to improve the lower bound on the margin between any two pairs of clusters?
(4) In all the results with running time of the paper, the authors only provide the bound in terms of $n$, $d$, and $k$. I wonder about the constants that are along with these terms. How do they change with margin $\rho$ and $\epsilon$-APS?
(5) As the authors indicate in the paper, data in practice may have very small value of $\epsilon$. It seems that the conditions of the results with polynomial algorithms in the paper, e.g. Theorem 3.2, will not be satisfied by real data.
(6) It seems to me that the results in the paper can be extended to other distances that are different from Euclidean distance. The geometric explanation of additive perturbation stability may be different, which also leads to the modifications of other notions and results in the paper. Such extensions will be very useful in case that we want to work with kernel K-means to deal with sophisticated structures of data. |
nips_2017_2133 | On-the-fly Operation Batching in Dynamic Computation Graphs
Dynamic neural network toolkits such as PyTorch, DyNet, and Chainer offer more flexibility for implementing models that cope with data of varying dimensions and structure, relative to toolkits that operate on statically declared computations (e.g., TensorFlow, CNTK, and Theano). However, existing toolkits-both static and dynamic-require that the developer organize the computations into the batches necessary for exploiting high-performance algorithms and hardware. This batching task is generally difficult, but it becomes a major hurdle as architectures become complex. In this paper, we present an algorithm, and its implementation in the DyNet toolkit, for automatically batching operations. Developers simply write minibatch computations as aggregations of single instance computations, and the batching algorithm seamlessly executes them, on the fly, using computationally efficient batched operations. On a variety of tasks, we obtain throughput similar to that obtained with manual batches, as well as comparable speedups over singleinstance learning on architectures that are impractical to batch manually. | Summary:
The authors of this paper extend neural network toolkit DyNet with automatic operation batching. Batching enables efficient utilization of CPUs and GPUs by turning matrix-vector products into matrix-matrix products and reducing kernel launch overhead (for GPUs) but it is commonly done manually. Manual batching is manageable for simple feed-forward-networks but it becomes increasingly a headache as we explore more flexible models that take variable-length input, tree-structured input, or networks that perform dynamic control decisions.
Chainer, DyNet, and PyTorch are recently proposed neural network toolkits that allow the user to dynamically define the computation graph using the syntax of the host language (if, while, etc. in Python). This is desirable as it avoids toolkit-specific constructions (e.g., cond in TensorFlow) and makes the network definition intuitive, but it tends to limit performance because network construction and computation happen at the same time. Thus, although it would be straightforward to program a control flow that supports variable-length or tree-structured input in these toolkits, in practice it would be too inefficient to process a single instance at a time.
The key contribution of this paper is to delay and automatically batch the computation so that a user can still define an arbitrary complex control flow using the host language as in original DyNet without worrying about operation batching. The approach is similar to TensorFlow Fold (TFF) but differs in two places: first, the computation graph is defined in a dynamic manner using the control flow of the host language and the user does not need to learn any new language (note that TFF introduces many new operations that are not in TensorFlow); second, it employs an agenda-based approach for operation batching in contrast to the depth-based approach employed in TFF. The authors empirically show that agenda-based batching is slightly more efficient than depth-based batching.
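To illustrate the kind of grouping such a batching heuristic performs, a schematic re-implementation of the idea as described (not DyNet's actual code): among the graph nodes whose inputs are already available, repeatedly pick one operation signature and execute all matching nodes as a single batch. The largest-group rule below is a placeholder for the paper's actual agenda heuristic.

```python
from collections import defaultdict, namedtuple

# Schematic sketch of agenda-based operation batching: nodes whose inputs are
# ready are grouped by an operation "signature" (op type + dimensions) and a
# whole group is executed together as one batched operation.
Node = namedtuple("Node", "name signature deps")

def agenda_batching(nodes):
    done, schedule, remaining = set(), [], list(nodes)
    while remaining:
        ready = [n for n in remaining if all(d in done for d in n.deps)]
        by_sig = defaultdict(list)
        for n in ready:
            by_sig[n.signature].append(n)
        sig, batch = max(by_sig.items(), key=lambda kv: len(kv[1]))  # placeholder heuristic
        schedule.append((sig, [n.name for n in batch]))
        done.update(n.name for n in batch)
        remaining = [n for n in remaining if n.name not in done]
    return schedule

# Toy graph: three independent affine -> tanh chains, as in per-instance RNN code.
nodes = [Node(f"aff{i}", ("affine", 64), ()) for i in range(3)] + \
        [Node(f"tanh{i}", ("tanh", 64), (f"aff{i}",)) for i in range(3)]
print(agenda_batching(nodes))
```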
Detailed comments:
It wasn't clear to me if there is a stronger argument for dynamic definition of the computation graph other than the simplicity of coding compared to static definition of the graph in TFF, for example.
Would the lazy evaluation still work when the control decision depends on the computed value (this is the case for the training-with-exploration scenario [2, 4, 9])? In this case, static graph definition may have an advantage.
nips_2017_3368 | Learning Linear Dynamical Systems via Spectral Filtering
We present an efficient and practical algorithm for the online prediction of discrete-time linear dynamical systems with a symmetric transition matrix. We circumvent the non-convex optimization problem using improper learning: carefully overparameterize the class of LDSs by a polylogarithmic factor, in exchange for convexity of the loss functions. From this arises a polynomial-time algorithm with a near-optimal regret guarantee, with an analogous sample complexity bound for agnostic learning. Our algorithm is based on a novel filtering technique, which may be of independent interest: we convolve the time series with the eigenvectors of a certain Hankel matrix. | Linear dynamical systems are a mainstay of control theory. Unlike the empirical morass of work that underlies much of machine learning work, e.g. deep learning, where there is little theory, and an attempt to produce general solutions of unknown reliability and quality, control theorists wisely have chosen an alternative course of action, where they focused on a simple but highly effective linear model of dynamics, which can be analyzed extremely deeply. This led to the breakthrough work many decades ago of Kalman filters, without which the moon landing would have been impossible.
This paper explores the problem of online learning (in the regret model) of dynamical systems, and improves upon previous work in this setting that was restricted to the single input single output (SISO) case [HMR 16]. Unlike that paper, the present work shows that regret bounded learning of an LDS is possible without making assumptions on the spectral structure (polynomially bounded eigengap), and signal source limitations.
The key new idea is a convex relaxation of the original non-convex problem, which, as the paper shows, is "the central driver" of their approach. The basic algorithm is a variant of the original projected gradient method of Zinkevich from 2003. The method is viewed as an online wave-filtered regression method (Algorithm 1), where pieces of the dynamical system are learned over time and stitched together into an overall model. The paper shows that optimization over linear maps is an appropriate convex relaxation of the original objective function.
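As a rough sketch of the filtering step: the filters are the top eigenvectors of a fixed Hankel matrix, and each one is convolved with the input series. The specific Hankel entries below, 2/((i+j)^3 - (i+j)), are my recollection of the construction and should be treated as an assumption; the paper also rescales each filter by a power of its eigenvalue, which is omitted here.

```python
import numpy as np

# Sketch of wave filtering: take the top-k eigenvectors of a fixed T x T Hankel
# matrix and use them as convolutional filters over the input time series.
def wave_filters(T, k):
    i = np.arange(1, T + 1)
    s = i[:, None] + i[None, :]
    Z = 2.0 / (s ** 3 - s)                      # assumed Hankel construction
    eigvals, eigvecs = np.linalg.eigh(Z)        # ascending order
    return eigvals[-k:], eigvecs[:, -k:]        # top-k eigenpairs

def filtered_features(x, filters):
    T, k = filters.shape
    # feature_j[t] = sum_u filters[u, j] * x[t - u]: a plain 1-D convolution
    return np.stack([np.convolve(x, filters[:, j])[:len(x)] for j in range(k)], axis=1)

sigma, phi = wave_filters(T=100, k=5)
x = np.sin(0.1 * np.arange(200))                # toy input series
print(filtered_features(x, phi).shape)          # (200, 5)
```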
The paper is well written and has an extensive theoretical analysis. It represents a solid step forward in the learning of dynamical systems models, and should spur further work on more difficult cases. The convex relaxation trick might be useful in other settings as well.
Some questions:
1. Predictive state representations (PSRs) are widely used in reinforcement learning to model learning from partially observed states. Does your work have any bearing on the learnability of PSRs, a longstanding challenge in the field.
2. There is quite a bit of work on kernelizing Kalman filters in various ways to obtain nonlinear dynamical systems. Does your approach extend to any of these extended models?
The approach is largely based on the simple projected gradient descent approach, but one wonders whether proximal gradient tricks that are so successful elsewhere (e.g, ADAGRAD, mirror prox etc.) would also be helpful here. In other words, can one exploit the geometry of the space to accelerate convergence? |
nips_2017_947 | A New Alternating Direction Method for Linear Programming
It is well known that, for a linear program (LP) with constraint matrix $A \in \mathbb{R}^{m \times n}$, the Alternating Direction Method of Multipliers converges globally and linearly at a rate $O((\|A\|_F^2 + mn)\log(1/\epsilon))$. However, such a rate is related to the problem dimension and the algorithm exhibits a slow and fluctuating "tail convergence" in practice. In this paper, we propose a new variable splitting method of LP and prove that our method has a convergence rate of $O(\|A\|^2 \log(1/\epsilon))$. The proof is based on simultaneously estimating the distance from a pair of primal dual iterates to the optimal primal and dual solution set by certain residuals. In practice, we result in a new first-order LP solver that can exploit both the sparsity and the specific structure of matrix A and a significant speedup for important problems such as basis pursuit, inverse covariance matrix estimation, L1 SVM and nonnegative matrix factorization problem compared with the current fastest LP solvers. | This paper develops a novel alternating direction based method for linear programming problems. The paper presents global convergence results, and a linear rate, for their algorithm. As far as I could see, the mathematics appears to be sound, although I did not check thoroughly. Numerical experiments were also presented that support the practical benefits of this new approach; this new algorithm is compared with two other algorithms and the results seem favorable.
Note that the authors call their algorithm FADMM - my suggestion is that the authors choose a different acronym because (i) there are several other ADMM variants already called FADMM, and (ii) this is an ADMM for LP so it might be more appropriate to call it something like e.g., LPADMM, which is a more descriptive acronym.
The convergence results and numerical experiments are good, but the thing that I liked most about this paper is that the algorithm seems very practical because the first ADMM subproblem can be solved *inexactly*. This is a very nice feature. The authors also put effort to explain how to solve this first subproblem efficiently from a practical perspective (Section 5), which I felt was an important contribution of the paper.
One of the reasons that I did not give this paper a higher score is that there are a couple of typos in the paper and the descriptions and language could be clearer in places. For example, $n_b$ is used in (1) but is not defined; $n_f$ is used at the end of Section 2.1 but is not defined; the Hessian in Section 2.1 is not correct; throughout the paper the authors use minus signs instead of hyphens (e.g., $n-$dimensional should be $n$-dimensional, $l_1-$regularized should be $l_1$-regularized etc); the bibliography has several typos especially with $ signs missing: [4] should have $\ell_1$-problems; [11] D and R should be capitalized 'Douglas-Rachford'; [16,17] should be capitalized ADMM. All these are minor, but the bar for NIPS is very high and these should be corrected, and the paper should be thoroughly proofread for English mistakes.
Overall I liked this paper. |
nips_2017_1193 | Revisit Fuzzy Neural Network: Demystifying Batch Normalization and ReLU with Generalized Hamming Network
We revisit fuzzy neural network with a cornerstone notion of generalized hamming distance, which provides a novel and theoretically justified framework to re-interpret many useful neural network techniques in terms of fuzzy logic. In particular, we conjecture and empirically illustrate that, the celebrated batch normalization (BN) technique actually adapts the "normalized" bias such that it approximates the rightful bias induced by the generalized hamming distance. Once the due bias is enforced analytically, neither the optimization of bias terms nor the sophisticated batch normalization is needed. Also in the light of generalized hamming distance, the popular rectified linear units (ReLU) can be treated as setting a minimal hamming distance threshold between network inputs and weights. This thresholding scheme, on the one hand, can be improved by introducing double-thresholding on both positive and negative extremes of neuron outputs. On the other hand, ReLUs turn out to be non-essential and can be removed from networks trained for simple tasks like MNIST classification. The proposed generalized hamming network (GHN) as such not only lends itself to rigorous analysis and interpretation within the fuzzy logic theory but also demonstrates fast learning speed, well-controlled behaviour and state-of-the-art performances on a variety of learning tasks. | The authors use a notion of generalized hamming distance, to shed light on the success of Batch normalization and ReLU units.
After reading the paper, I am still very confused about its contribution. The authors claim that generalized hamming distance offers a better view of batch normalization and relus, and explain that in two paragraphs in pages 4,5. The explanation for batch normalization is essentially contained in the following phrase:
“It turns out BN is indeed attempting to compensate for deficiencies in neuron outputs with respect to GHD. This surprising observation indeed adheres to our conjecture that an optimized neuron should faithfully measure the GHD between inputs and weights.”
I do not understand how this is explaining the effects or performance of batch normalization.
The authors then propose a generalized hamming network, and suggest that "it demystified and confirmed effectiveness of practical techniques such as batch normalization and ReLU".
Overall, this is a poorly written paper, with no major technical contribution, or novelty, and does not seem to provide any theoretical insights on the effectiveness of BN or ReLUs. Going beyond the unclear novelty and technical contribution, the paper is riddled with typos, grammar and syntax mistakes (below is a list from just the abstract and intro).
This is a clear rejection.
Typos and grammar/syntax mistakes:
—— abstract ——
generalized hamming network (GNN)
-> generalized hamming network (GHN)
GHN not only lends itself to rigiour analysis
-> GHN not only lends itself to rigorous analysis
“but also demonstrates superior performances”
-> but also demonstrates superior performance
——
—— intro ——
“computational neutral networks”
-> computational neural networks
“has given birth”
-> have given birth
“to rectifying misunderstanding of neural computing”
-> not sure what the authors are trying to say
Once the appropriate rectification is applied ,
-> Once the appropriate rectification is applied,
the ill effects of internal covariate shift is automatically eradicated
-> the ill effects of internal covariate shift are automatically eradicated
The resulted learning process
-> The resulting learning process
lends itself to rigiour analysis
-> lends itself to rigorous analysis
the flexaible knowledge
-> the flexible knowledge
are equivalent and convertible with other
-> are equivalent and convertible with others, or other architectures?
successful applications of FNN
-> successful applications of FNNs |
nips_2017_1192 | Interpretable and Globally Optimal Prediction for Textual Grounding using Image Concepts
Textual grounding is an important but challenging task for human-computer interaction, robotics and knowledge mining. Existing algorithms generally formulate the task as selection from a set of bounding box proposals obtained from deep net based systems. In this work, we demonstrate that we can cast the problem of textual grounding into a unified framework that permits efficient search over all possible bounding boxes. Hence, the method is able to consider significantly more proposals and doesn't rely on a successful first stage hypothesizing bounding box proposals. Beyond, we demonstrate that the trained parameters of our model can be used as word-embeddings which capture spatial-image relationships and provide interpretability. Lastly, at the time of submission, our approach outperformed the current state-of-the-art methods on the Flickr 30k Entities and the ReferItGame dataset by 3.08% and 7.77% respectively. | Summary
The approach proposes a simple method to find the globally optimal (under the formulation) box in an image which represents the grounding of the textual concept. Unlike previous approaches which adopt a two-stage pipeline where region proposals are first extracted and then combined into grounded image regions (see [A] for example), this approach proposes a formulation which finds the globally optimal box for a concept. The approach assumes the presence of spatial heat maps depicting various concepts, and uses priors and geometry-related information to construct an energy function, with learnable parameters for combining different cues. A restriction of the approach is that known concepts can only be combined linearly with each other (for instance the score map of “dirt bike” is “dirt” + “bike” with the learned weighting, of course), but this also allows for optimal inference for the given model class. More concretely, the paper proposes an efficient technique based on [22] to use branch and bound for efficient sub-window search. Training is straightforward and clean through cutting-plane training of a structured SVM. The paper also shows how to do efficient loss-augmented inference during SVM training, which makes the same branch and bound approach applicable to cutting-plane training as well. Finally, results are shown against competitive (near-state-of-the-art) approaches on two datasets where the proposed approach is shown to outperform the state of the art.
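A minimal sketch of the integral-image machinery that makes box scores cheap to evaluate, which is what the branch-and-bound subwindow search of [22] exploits (the score map below is a random placeholder):

```python
import numpy as np

# Integral image over a per-pixel score map: any box sum then costs O(1),
# which is what makes evaluating and bounding box scores cheap.
def integral_image(score_map):
    ii = np.cumsum(np.cumsum(score_map, axis=0), axis=1)
    return np.pad(ii, ((1, 0), (1, 0)))          # zero row/column for clean indexing

def box_sum(ii, y1, x1, y2, x2):
    """Sum of score_map[y1:y2, x1:x2] in O(1)."""
    return ii[y2, x2] - ii[y1, x2] - ii[y2, x1] + ii[y1, x1]

rng = np.random.RandomState(0)
scores = rng.randn(40, 60)                       # e.g. a combined word/geometry score map
ii = integral_image(scores)
assert np.isclose(box_sum(ii, 5, 10, 20, 30), scores[5:20, 10:30].sum())
```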
Strengths
- Approach alleviates the need for a blackbox stage which generates region proposals.
- The interpretation of the weights of the model and the concepts as word embeddings is a neat little tidbit.
- The paper does a good job of commenting on cases where the approach fails, specifically pointing out some interesting examples such as “dirt bike” where the additive nature of the feature maps is a limitation.
- The paper has some very interesting ideas such as the use of the classic integral images technique to do efficient inference using branch and bound, principled training of the model via. a clean application of Structural SVM training with the cutting plane algorithm etc.
Weakness
1. Paper misses citing a few relevant recent related works [A], [B], which could also benefit from the proposed technique and use region proposals.
2. Another highly relevant work is [C] which does efficient search for object proposals in a similar manner to this approach building on top of the work of Lampert et.al.[22]
3. It is unclear what SPAT means in Table. 2.
4. How was Fig. 6 b) created? Was it by random sub-sampling of concepts?
5. It would be interesting to consider a baseline which just uses the feature maps (used in the work, say shown in Fig. 2) and the phrases and simply regresses to the target coordinates using an MLP. Is it clear that the proposed approach would outperform it? (*)
6. L130: It was unclear to me how exactly the geometry constraints are implemented in the algorithm, i.e. the exposition of how the term k2 is computed was unclear. It would be great to provide details. A clear explanation of this seems especially important since the performance of the system seems highly dependent on this term (as it is trivial to maximize the sum of scores of, say, detection heat maps by considering the entire image as the set).
Preliminary Evaluation
The paper has a neat idea which is implemented in a very clean manner, and is easy to read. Concerns important for the rebuttal are marked with (*) above.
[A] Hu, Ronghang, Marcus Rohrbach, Jacob Andreas, Trevor Darrell, and Kate Saenko. 2016. “Modeling Relationships in Referential Expressions with Compositional Modular Networks.” arXiv [cs.CV]. arXiv. http://arxiv.org/abs/1611.09978.
[B] Nagaraja, Varun K., Vlad I. Morariu, and Larry S. Davis. 2016. “Modeling Context Between Objects for Referring Expression Understanding.” arXiv [cs.CV]. arXiv. http://arxiv.org/abs/1608.00525.
[C] Sun, Qing, and Dhruv Batra. 2015. “SubmodBoxes: Near-Optimal Search for a Set of Diverse Object Proposals.” In Advances in Neural Information Processing Systems 28, edited by C. Cortes, N. D. Lawrence, D. D. Lee, M. Sugiyama, and R. Garnett, 1378–86. Curran Associates, Inc. |
nips_2017_3095 | Learning Non-Gaussian Multi-Index Model via Second-Order Stein's Method
We consider estimating the parametric components of semiparametric multi-index models in high dimensions. To bypass the requirements of Gaussianity or elliptical symmetry of covariates in existing methods, we propose to leverage a second-order Stein's method with score function-based corrections. We prove that our estimator achieves a near-optimal statistical rate of convergence even when the score function or the response variable is heavy-tailed. To establish the key concentration results, we develop a data-driven truncation argument that may be of independent interest. We supplement our theoretical findings with simulations. | Summary: This article studies the estimation problem of single index and multiple index models in high dimensional setting. Using the formulation of second order Stein's lemma with sparsity assumptions, they propose an estimator formulated as a solution to a sparsity constrained semi-definite programming. The statistical rate of convergence of the estimator is derived and a numerical example is given.
Overall, I find the article interesting and useful, but their emphasis on the "heavy tail" case and, partly, on non-Gaussianity is rather intriguing. Looking at the details, the main contribution the authors claim seems to be an extension of an optimality result for the estimation of sparse PCA from the sub-Gaussian case to an essentially sub-exponential case through a Bernstein inequality. It would be helpful to clarify this point better.
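For reference, the second-order Stein identity that I understand to underlie the estimator is the following (my notation, regularity conditions omitted, so this should be checked against the paper):

```latex
\[
  S_2(x) \;=\; \frac{\nabla^2 p(x)}{p(x)}, \qquad
  \mathbb{E}\bigl[f(X)\, S_2(X)\bigr] \;=\; \mathbb{E}\bigl[\nabla^2 f(X)\bigr],
\]
```

which, in the standard Gaussian case, reduces to the familiar $\mathbb{E}[f(X)(XX^\top - I)] = \mathbb{E}[\nabla^2 f(X)]$.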
Detailed comments are summarized below.
(a) Stein's lemma: I think one of the first references to an estimator based on Stein's lemma is H\"ardle and Stoker (1989), using the so-called "method of average derivatives", which does not require Gaussianity.
(b) Derivative condition: Normally the condition on the second derivative would be stronger than that on the first derivative, yet, the article suggests the other way around. Is there any other place to pay the price, or does it really provide more general framework?
(c) Heavy tail case: The notion of "heavy tail" (a Pareto-like tail to most readers, I suspect, e.g. Barthe et al., 2005) is confusing and misleading in this context. In particular, I would think the heavy-tail problem in economics has more to do with extremes, and I doubt that the new approach is applicable in that context. Hence, I would suggest removing the reference to the heavy-tail part, and instead making a better link to the contributions in terms of a relaxation of the tail conditions.
(d) Moment condition: In fact, I wonder how the moment condition on the score function could be translated in terms of the standard moment conditions (and tail conditions) on the original random variables. It would be helpful to demonstrate this point, and perhaps give a condition to make those assumptions comparable.
(e) Relation to Sparse PCA: Given the proximity to the formulation of the estimator to sparse PCA, I was imagining that a similar technique of the proofs from sparse PCA would have been used here, (except utilizing another (Berstein-type) concentration inequality), however, the relation to sparse PCA was presented as a consequence of their results. Then, in addition to the similarity to sparse PCA, could you also demonstrate a fundamental difference in techniques/considerations, if any, (other than the moment condition, or to overcome the moment condition?) required to deal with estimation problems of single index and multiple index models?
(f) Numerical algorithm: The algorithm to compute the estimator is mentioned only in Section 5. I would suggest to include some computational details in Section 3 as well, perhaps after (3.6).
(g) Minimum eigenvalue condition: I suppose without sparsity assumption, the minimum eigenvalue (line 174, p.5) should be zero for high dimensional case when d > k. Then, instead of "Finally" to introduce this condition, which seems to suggest that they are unrelated, would the connection need to be emphasized here as well?
Minor comments and typos:
(i) line 54, p2: (... pioneered by Ker-Chau Li, ...): From the flow of the arguments, it would be natural to add the references [20,21,22] here.
(ii) line 75, p2: I understood "SDP" as "semi-definite programming" in the end, but it would be better to be clear from the beginning here.
(iii) equation (4.1) p.4: typo.
(iv) check references: the style of references is not coherent.
References:
Barthe, F., Cattiaux, P. and Roberto, C. (2005) Concentration for independent random variables with heavy tails, Applied Mathematics Research Express (2). 39-60.
H\"ardle, W. and Stoker, T.M. (1989) Investigating smooth multiple regression by the method of average derivatives, JASA 84 (408), 986-995. |
nips_2017_1148 | Practical Bayesian Optimization for Model Fitting with Bayesian Adaptive Direct Search
Computational models in fields such as computational neuroscience are often evaluated via stochastic simulation or numerical approximation. Fitting these models implies a difficult optimization problem over complex, possibly noisy parameter landscapes. Bayesian optimization (BO) has been successfully applied to solving expensive black-box problems in engineering and machine learning. Here we explore whether BO can be applied as a general tool for model fitting. First, we present a novel hybrid BO algorithm, Bayesian adaptive direct search (BADS), that achieves competitive performance with an affordable computational overhead for the running time of typical models. We then perform an extensive benchmark of BADS vs. many common and state-of-the-art nonconvex, derivative-free optimizers, on a set of model-fitting problems with real data and models from six studies in behavioral, cognitive, and computational neuroscience. With default settings, BADS consistently finds comparable or better solutions than other methods, including 'vanilla' BO, showing great promise for advanced BO techniques, and BADS in particular, as a general model-fitting tool. | This paper presents a new optimization method that combines Bayesian optimization applied locally with concepts from MADS to provide nonlocal exploration. The main idea of the paper is to find an algorithm that is suitable for the range of functions that are slightly expensive, but not enough to require the sample efficiency of standard Bayesian optimization. The authors applied this method to maximum likelihood computations with models whose evaluation takes on the order of ~1 second.
A standard critique of Bayesian optimization methods is that they are very expensive, due to the fact that they rely on a surrogate model, like a Gaussian process, which has an O(n^3) cost. The method presented in this paper (BADS) also relies on a GP. The paper addresses this issue by fitting the GP only on a local region, limited to 50+10D points.
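To make the kind of surrogate-based loop under discussion concrete, here is a minimal sketch of a vanilla GP-based BO iteration in Python. It is not BADS itself (no MADS poll stage, no local restriction of the GP to 50+10D points), and the kernel, acquisition function, candidate sampling, and budget choices are illustrative assumptions; `bounds` is assumed to be a (d, 2) array of box constraints.

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def bo_minimize(f, bounds, n_init=10, n_iter=50, n_cand=2000, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = bounds[:, 0], bounds[:, 1]
    X = rng.uniform(lo, hi, size=(n_init, len(lo)))        # random initial design
    y = np.array([f(x) for x in X])
    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
    for _ in range(n_iter):
        gp.fit(X, y)                                        # O(n^3) in the number of evaluations
        cand = rng.uniform(lo, hi, size=(n_cand, len(lo)))  # random candidate pool
        mu, sd = gp.predict(cand, return_std=True)
        best = y.min()
        z = (best - mu) / np.maximum(sd, 1e-12)
        ei = (best - mu) * norm.cdf(z) + sd * norm.pdf(z)   # expected improvement (minimization)
        x_next = cand[np.argmax(ei)]
        X = np.vstack([X, x_next])
        y = np.append(y, f(x_next))
    return X[np.argmin(y)], y.min()
```

The per-iteration overhead of refitting the GP on all points is exactly the cost that motivates restricting the surrogate to a local region.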
The paper ignores all the work that has been done in Bayesian optimization with much more efficient surrogate models, like random forests [A], Parzen estimators [B] or treed GPs [7], where available software shows that the computational cost is comparable to the one from BADS. It is known that those methods have worse global performance than GP-based BO for problems in R^n, but given that this method uses a local approximation, I would assume that the performance per iteration is also lower than that of GP-BO.
Furthermore, because the main objective of BO is sample efficiency, some of the problems presented here could be solved within 50-100 iterations, making it even more efficient than BADS, as no extra steps would be required. In fact, optimized BO software [C] has a computational cost similar to the one reported here for BADS for the first 100 iterations. Note that [C] already has the rank-one updates implemented, as suggested in section 3.2.
The way the results are presented leaves some open questions:
- Is the error tolerance relative or absolute? Are the problems normalized? What does an error larger than 10 mean? Is it 10%?
- At the end of the plot, a third of the functions are still unsolved. Without seeing the actual behaviour of the optimizers on any function, it is impossible to say whether they were close, stuck in a local optimum, or completely lost...
- Why use different plots for different problems? Why not also plot the noisy case versus the number of evaluations?
- How is the "...effective performance of BADS by accounting for the extra cost..." from Figure 2 defined?
- If BO is used as a reference and compared against in many parts of the text, why is it not included in the experiments? If the authors think it should not be naively included because of the extra cost, they could also "account for the extra cost", as in Figure 2.
There is also a large set of optimization algorithms, mainly in the evolutionary computation community, that relies on GPs and similar models for local or global modeling. For example: [D, E] and references therein.
[A] Frank Hutter, Holger Hoos, and Kevin Leyton-Brown (2011). Sequential model-based optimization for general algorithm configuration, Learning and Intelligent Optimization
[B] Bergstra, James S., Rémi Bardenet, Yoshua Bengio, and Balázs Kégl. "Algorithms for hyper-parameter optimization." In Advances in Neural Information Processing Systems, pp. 2546-2554. 2011.
[C] Ruben Martinez-Cantin (2014) BayesOpt: A Bayesian Optimization Library for Nonlinear Optimization, Experimental Design and Bandits. Journal of Machine Learning Research, 15:3735-3739.
[D] Zhou, Zongzhao, Yew Soon Ong, Prasanth B. Nair, Andy J. Keane, and Kai Yew Lum. "Combining global and local surrogate models to accelerate evolutionary optimization." IEEE Transactions on Systems, Man, and Cybernetics, Part C (Applications and Reviews) 37, no. 1 (2007): 66-76.
[E] Jin, Yaochu. "Surrogate-assisted evolutionary computation: Recent advances and future challenges." Swarm and Evolutionary Computation 1, no. 2 (2011): 61-70. |
nips_2017_2649 | GibbsNet: Iterative Adversarial Inference for Deep Graphical Models
Directed latent variable models that formulate the joint distribution as p(x, z) | The authors proposed an extension of the Adversarially Learned Inference (ALI) GAN that cycles between the latent and visible space for a few steps. The model iteratively refines the latent distribution by alternating the generator and approximate inference model in a chain computation. A joint distribution of both latent and visible data is then learnt by backpropagating through the iterative refinement process. The authors empirically demonstrated their model on a few imprint tasks.
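To fix ideas, here is a rough sketch (in Python form) of the alternating chain as I read the description above. The names `generator`, `encoder`, and the number of steps are placeholders and assumptions on my part, not the paper's exact specification; only the adversarial matching of the two kinds of (x, z) pairs is taken from the description.

```python
def unclamped_chain(generator, encoder, prior_sample, n_steps=3):
    """Free-running chain: start from the prior and alternate x = G(z), z = E(x)."""
    z = prior_sample()
    x = generator(z)
    for _ in range(n_steps - 1):
        z = encoder(x)        # visible -> refined latent
        x = generator(z)      # latent -> visible
    return x, z               # a (x, z) pair from the model's joint

def clamped_chain(encoder, x_data):
    """Clamped chain: visible units are fixed to real data, a single inference pass."""
    return x_data, encoder(x_data)

# Training sketch: a discriminator is asked to tell apart unclamped (x, z) pairs
# from clamped ones, and the generator/encoder are updated adversarially, with
# gradients flowing back through the iterative refinement steps.
```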
Strength:
- The paper is well-organized and is easy to follow.
- The experimental results on semi-supervised learning are encouraging. (more on that see the comment below. )
Weakness:
- The main objection I have with the paper is that the authors did not put in any effort to quantitatively evaluate their newly proposed GAN training method. Comparing the inception score on CIFAR-10 with ALI and other benchmark GAN methods should be a must. The authors should also consider estimating the actual log-likelihood of their GAN model by running the evaluation method proposed in "On the Quantitative Analysis of Decoder-Based Generative Models", Wu et al. The bottom line is that without appropriate quantitative analysis, it is hard to evaluate how well the proposed method does in general. What would help is a plot where the x-axis is the number of Gibbs steps and the y-axis is one of the quantitative measures.
- The improvement of the proposed method seems to be very marginal compared to ALI. The appropriate baseline comparison should be a deeper ALI model that has 2N layers. The "Gibbs chain" used throughout this paper is almost like a structured recurrent neural network in which some of the intermediate hidden layers actually represent x and z. So, it is unfair to compare a 3-step GibbsNet with a 2-layer feedforward ALI model.
===================
After I have read the rebuttal from the author, I have increased my score to reflect the new experiments conducted by the authors. The inception score results and architecture comparisons have addressed my previous concerns on evaluation.
I am still concerned regarding the experimental protocols. The exact experimental setup for the semi-supervised learning results was not explained in detail. I suspect the GibbsNet uses a very different experimental protocol for SVHN and MNIST than the original ALI paper. It is hard to evaluate the relative improvement over ALI if the protocols are totally different. It is necessary to include all the experimental details in a future revision for reproducibility. |
nips_2017_3321 | The Scaling Limit of High-Dimensional Online Independent Component Analysis
We analyze the dynamics of an online algorithm for independent component analysis in the high-dimensional scaling limit. As the ambient dimension tends to infinity, and with proper time scaling, we show that the time-varying joint empirical measure of the target feature vector and the estimates provided by the algorithm will converge weakly to a deterministic measured-valued process that can be characterized as the unique solution of a nonlinear PDE. Numerical solutions of this PDE, which involves two spatial variables and one time variable, can be efficiently obtained. These solutions provide detailed information about the performance of the ICA algorithm, as many practical performance metrics are functionals of the joint empirical measures. Numerical simulations show that our asymptotic analysis is accurate even for moderate dimensions. In addition to providing a tool for understanding the performance of the algorithm, our PDE analysis also provides useful insight. In particular, in the high-dimensional limit, the original coupled dynamics associated with the algorithm will be asymptotically "decoupled", with each coordinate independently solving a 1-D effective minimization problem via stochastic gradient descent. Exploiting this insight to design new algorithms for achieving optimal trade-offs between computational and statistical efficiency may prove an interesting line of future research. | The paper studies the high-dimensional scaling limit of a stochastic update algorithm for online Independent Component Analysis.
The main result of the paper is an exact characterization of the evolution of the joint empirical distribution of the estimates
output by the algorithm and the signal to be recovered, when the number of observations scales linearly with the dimension of the problem.
The authors argue that in the limit, this joint distribution is the unique solution to a certain partial differential equation,
from which the performance of the algorithm can be predicted, in accordance with the provided simulations.
1- Overall, the result is fairly novel, and provides interesting insight into the behavior of stochastic algorithms in this non-convex problem.
The paper is written in a fairly clear manner and is (mostly) easy to read.
My main concern is that the mathematics are written in a very informal way, so that it is not clear what
the authors have actually proved. There are no theorem or proposition statements.
E.g., under what conditions does the empirical measure have a weak limit?
Under what (additional) conditions does this limit have a density?
And when is this density the unique solution to the PDE?
The heuristic justification in the appendix is not very convincing either (see bullet 4).
2- A very interesting insight of the theory is that the p variables of the problem decouple in the limit,
where each variable obeys a diffusion equation independently of the others.
The only global aspect retained by these local dynamics is via the order parameters R and Q.
This phenomenon is standard in mean-field models of interacting particle systems, where the dynamics of the particles decouple when the order parameters converge to a limit.
The authors should probably draw this connection explicitly; the relevant examples are the Sherrington-Kirkpatrick model, rank-one estimation in spiked models, compressed sensing, high-dim. robust estimation…
3- The connection to the SDE line 175 should be immediate given that the PDE (5) is its associated Fokker-Planck equation.
The iteration (14) is more easily seen as a discretization of the SDE rather than as some approximate dynamics solving the PDE. Therefore, the SDE should come immediately after the statement of the PDE, then the iteration.
4- The derivations in the appendix proceed essentially by means of a cavity (or a leave-one-out) method,
but end abruptly without clear conclusion. The conclusion seems to be the decoupled iteration (14), not the actual PDE.
This should be made clear, or otherwise, the derivation should be conducted further to the desired conclusion.
The derivation contains some errors and typos; I couldn't follow the computations
(e.g., not clear how line 207 was obtained, a factor c_k should appear next to Q_k^p on many lines...)
5-There are several imprecisions/typos in the notation:
line 36: support recover*y*
eq 2: sign inconsistency in front of \tau_k (- gradient, not +)
line 100: x_k should be x
line 133: u should be \xi (in many later occurrences)
eq 8: \tau not defined (is it the limit of the step sizes \tau_k?)
line 116: t = [k/p] should be k = [tp] |
nips_2017_3147 | Invariance and Stability of Deep Convolutional Representations
In this paper, we study deep signal representations that are near-invariant to groups of transformations and stable to the action of diffeomorphisms without losing signal information. This is achieved by generalizing the multilayer kernel introduced in the context of convolutional kernel networks and by studying the geometry of the corresponding reproducing kernel Hilbert space. We show that the signal representation is stable, and that models from this functional space, such as a large class of convolutional neural networks, may enjoy the same stability. | The primary focus of the paper is CKN (convolutional kernel network) [13, 14]. In this manuscript the authors analyse the stability [w.r.t. C^1 diffeomorphisms (such as translation), in the sense of Eq. (4)] of the representation formed by CKNs. They show that for norm-preserving and non-expansive kernels [(A1-A2) in line 193] stability holds for appropriately chosen patch sizes [(A3)]. Extension from (R^d,+) to locally compact groups is sketched in Section 4.
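Since Eq. (4) is not reproduced here, it may help to recall how stability to diffeomorphisms is usually formalized in this line of work; the exact constants and normalizations in the paper may differ from this generic template. For a representation $\Phi$ and a deformation $L_\tau x(u) = x(u - \tau(u))$ with $\tau \in C^1$, one asks for a bound of the form

$$ \| \Phi(L_\tau x) - \Phi(x) \| \;\le\; \big( C_1\,\|\nabla \tau\|_\infty + C_2\,\|\tau\|_\infty \big)\,\|x\|, $$

so that a small Jacobian norm $\|\nabla\tau\|_\infty$ (a deformation close to a translation) produces a small change in the representation, while the $\|\tau\|_\infty$ term quantifies the residual sensitivity to translations.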
The paper is nicely organized, clearly written, technically sound, combining ideas from two exciting areas (deep networks and kernels). The stability result can be of interest to the ML community.
-The submission would benefit from adding further motivation on the stability analysis. Currently there is only one short sentence (line 56-57: 'Finally, we note that the Lipschitz stability of deep predictive models was found to be important to get robustness to adversarial examples [7].') which motivates the main contribution of the paper.
-Overloading the \kappa notation in (A3) [line 193] might be confusing, it also denotes a function defining kernel K in Eq. (10).
-In the displayed equation between line 384 and 385, the second part ('and \forall v...') is superfluous; given the symmetry of kernel k, it is identical to the first constraint ('\forall u ...').
-Line 416: the definition of \phi is missing, it should be introduced in Eq. (10) [=<\phi(z),\phi(z')>_{H(K)}].
-Line 427-428: The inequality under '=' seems to also hold with equality, | ||z|| - ||z|| |^2 should be | ||z|| - ||z'|| |^2.
References:
[3,6,8,13,14,18,25-27,30,31]: page information is missing.
[9]: appeared -> Amit Daniely, Roy Frostig, Yoram Singer. Toward Deeper Understanding of Neural Networks: The Power of Initialization and a Dual View on Expressivity. Advances in Neural Information Processing Systems (NIPS), pages 2253-2261, 2016.
[17]: appeared -> Krikamol Muandet, Kenji Fukumizu, Bharath Sriperumbudur, Bernhard Sch{\"o}lkopf. Kernel Mean Embedding of Distributions: A Review and Beyond. Foundations and Trends in Machine Learning, 10(1-2): 1-141.
[19]: appeared -> Anant Raj, Abhishek Kumar, Youssef Mroueh, Tom Fletcher, Bernhard Sch{\"o}lkopf. International Conference on Artificial Intelligence and Statistics (AISTATS), PMLR 54:1225-1235, 2017.
[32]: accepted (https://2017.icml.cc/Conferences/2017/Schedule?type=Poster) -> Yuchen Zhang, Percy Liang, Martin Wainwright. Convexified Convolutional Neural Networks. International Conference on Machine Learning (ICML), 2017, accepted. |
nips_2017_1226 | Alternating minimization for dictionary learning with random initialization
We present theoretical guarantees for an alternating minimization algorithm for the dictionary learning/sparse coding problem. The dictionary learning problem is to factorize vector samples y_1, y_2, ..., y_n into an appropriate basis (dictionary) A* and sparse vectors x_1*, ..., x_n*. Our algorithm is a simple alternating minimization procedure that switches between ℓ1 minimization and gradient descent in alternate steps. Dictionary learning and specifically alternating minimization algorithms for dictionary learning are well studied both theoretically and empirically. However, in contrast to previous theoretical analyses for this problem, we replace a condition on the operator norm (that is, the largest magnitude singular value) of the true underlying dictionary A* with a condition on the matrix infinity norm (that is, the largest magnitude term). Our guarantees are under a reasonable generative model that allows for dictionaries with growing operator norms, and can handle an arbitrary level of overcompleteness, while having sparsity that is information theoretically optimal. We also establish upper bounds on the sample complexity of our algorithm.
Erratum, August 7, 2019: An earlier version of this paper appeared in NIPS 2017 which had an erroneous claim about convergence guarantees with random initialization. The main result, Theorem 3, has been corrected by adding an assumption about the initialization (Assumption B1). | This paper proposes and analyzes an alternating minimization-based algorithm to recover the dictionary matrix and sparse coefficient matrix in a dictionary learning setting. A primary component of the contribution here comes in the form of an alternate analysis of the matrix uncertainty (MU) selector of Belloni, Rosenbaum, and Tsybakov, to account for worst-case rather than probabilistic corruptions.
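As a concrete (and deliberately simplified) picture of the kind of alternating scheme being analyzed, the sketch below alternates an ℓ1-regularized sparse-coding step with a gradient step on the dictionary. The step sizes, the ISTA inner loop, and the column normalization are my own illustrative choices, not the paper's Algorithm 1.

```python
import numpy as np

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def alt_min_dictionary_learning(Y, r, lam=0.1, n_outer=50, n_ista=100, eta=0.5):
    """Y: (d, n) data matrix; r: number of dictionary atoms."""
    d, n = Y.shape
    rng = np.random.default_rng(0)
    A = rng.standard_normal((d, r))
    A /= np.linalg.norm(A, axis=0, keepdims=True)          # unit-norm atoms
    X = np.zeros((r, n))
    for _ in range(n_outer):
        # (1) sparse coding: ISTA on 0.5*||Y - A X||_F^2 + lam*||X||_1
        L = np.linalg.norm(A, 2) ** 2                       # Lipschitz constant of the gradient
        for _ in range(n_ista):
            X = soft_threshold(X - (A.T @ (A @ X - Y)) / L, lam / L)
        # (2) dictionary update: one gradient step on 0.5*||Y - A X||_F^2
        A -= (eta / n) * (A @ X - Y) @ X.T
        A /= np.linalg.norm(A, axis=0, keepdims=True)       # re-normalize columns
    return A, X
```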
Pros:
+ The flavor of the contribution here seems to improve (i.e., relax) the conditions under which methods like this will succeed, relative to existing works. Specifically, the motivation and result of this work amounts to specifying sufficient conditions on the vectorized infinity norm of the unknown dictionary matrix, rather than its operator norm, under which provable recovery is possible. This has the effect of making the method potentially less dependent on ambient dimensions, especially for "typical" constructions of the (incoherent) dictionaries such as certain random generations.
+ The alternate analysis of the MU selector is independently interesting.
Cons:
- It would be interesting to see some experimental validation of the proposed method, especially ones that investigate the claimed improvements in the conditions on the unknown dictionary relative to prior efforts. In other words, do the other efforts that state results in terms of the operator norm fail in settings where this method succeeds? Or are the methods all viable, but just a more refined analysis here? This is hard to determine here, and should be explored a bit, I think.
- The paper is hard to digest, partly because of notation, and partly because of some pervasive grammatical and formatting issues:
- Line 8 of algorithm 1, as written, seems to require knowledge of the true A,x quantities to compute. In reality, it seems this should be related somehow to the samples {y} themselves. Can this be written a bit more clearly?
- Condition (c4) on page 6, line 206 is confusing as written. The dimension of x is r, and it is s-sparse, so there are more than r options for *sets* of size s; this should be r-choose-s, I guess. The subsequent conditions are apparently based on this kind of model, and seem to be correct.
- Why include the under brace in the first equation of line 251 on page 7? Also, repeating the LHS is a little non-standard.
- The "infinite samples" analysis is a little strange to me, too. Why not simply present and analyze the algorithm (in the main body of the paper) in terms of the finite sample case? The infinite case seems to be an analytical intermediate step, not a main contribution in itself.
- There are many sentence fragments that are hard to parse, e.g., "Whereas..." on line 36 page 2, "While..." on line 77 page 2, and "Given..." on line 131 page 4. |
nips_2017_3588 | Multi-view Matrix Factorization for Linear Dynamical System Estimation
We consider maximum likelihood estimation of linear dynamical systems with generalized-linear observation models. Maximum likelihood is typically considered to be hard in this setting since latent states and transition parameters must be inferred jointly. Given that expectation-maximization does not scale and is prone to local minima, moment-matching approaches from the subspace identification literature have become standard, despite known statistical efficiency issues. In this paper, we instead reconsider likelihood maximization and develop an optimization based strategy for recovering the latent states and transition parameters. Key to the approach is a two-view reformulation of maximum likelihood estimation for linear dynamical systems that enables the use of global optimization algorithms for matrix factorization. We show that the proposed estimation strategy outperforms widely-used identification algorithms such as subspace identification methods, both in terms of accuracy and runtime. | This paper proposes an efficient maximum likelihood algorithm for parameter estimation in linear dynamical systems. The problem is reformulated as a two-view generative model with a shared latent factor, and approximated as a matrix factorization problem. The paper then proposes a novel proximal update. Experiments validate the effectiveness of the proposed method.
The paper recognizes that maximum likelihood style algorithms have some merit over classical moment-matching algorithms in LDS, and aims to solve the efficiency problem of existing maximum likelihood algorithms. The paper then proposes a theoretically guaranteed proximal update to solve the optimization problem.
However, I do not understand why the paper tells the story from a two-view perspective. In an LDS, we can construct x_t in two ways: one is from \phi_t and the other is from \phi_{t-1}. Eq. 4 minimizes the reconstruction error for x_t constructed in both ways, with regularization on \Phi, C and A. This is a general framework, a reconstruction loss plus regularization, widely used in many classical machine learning algorithms. I do not see much novelty in the proposed learning objective Eq. 4, and I cannot tell the merit of recasting the old story in a two-view light.
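To make this reading concrete (as my own paraphrase rather than a quotation of Eq. 4), the two-view objective being described is of the form

$$ \min_{\Phi, C, A} \; \sum_t \Big( \big\|x_t - C\phi_t\big\|_2^2 + \big\|x_t - C A \phi_{t-1}\big\|_2^2 \Big) \;+\; \mathcal{R}(\Phi) + \mathcal{R}(C) + \mathcal{R}(A), $$

i.e., each observation is reconstructed once through the current latent state and once through the transition applied to the previous one, with regularizers on the latent trajectory, the observation map, and the transition matrix.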
Lemma 1 proposes a method to transform the nonconvex problem into a convex one. However, I cannot see any benefit of transforming Eq. 4 into a convex problem. The new learning objective Z in Lemma 1 is just a de-noised version of x_t. The original low-dimensional latent space estimation problem thus becomes the problem of directly estimating the de-noised, high-rank observed state, so all the merits of the low-dimensional assumption are lost. Since the proposed model is still non-convex, what is its merit compared to classical non-convex matrix factorization style algorithms for LDS?
nips_2017_1521 | Parallel Streaming Wasserstein Barycenters
Efficiently aggregating data from different sources is a challenging problem, particularly when samples from each source are distributed differently. These differences can be inherent to the inference task or present for other reasons: sensors in a sensor network may be placed far apart, affecting their individual measurements. Conversely, it is computationally advantageous to split Bayesian inference tasks across subsets of data, but data need not be identically distributed across subsets. One principled way to fuse probability distributions is via the lens of optimal transport: the Wasserstein barycenter is a single distribution that summarizes a collection of input measures while respecting their geometry. However, computing the barycenter scales poorly and requires discretization of all input distributions and the barycenter itself. Improving on this situation, we present a scalable, communication-efficient, parallel algorithm for computing the Wasserstein barycenter of arbitrary distributions. Our algorithm can operate directly on continuous input distributions and is optimized for streaming data. Our method is even robust to nonstationary input distributions and produces a barycenter estimate that tracks the input measures over time. The algorithm is semi-discrete, needing to discretize only the barycenter estimate. To the best of our knowledge, we also provide the first bounds on the quality of the approximate barycenter as the discretization becomes finer. Finally, we demonstrate the practical effectiveness of our method, both in tracking moving distributions on a sphere, as well as in a large-scale Bayesian inference task.
barycenter of the input measures [1], and should be thought of as an aggregation of the input measures which preserves their geometry. This particular aggregation enjoys many nice properties: in the earlier Bayesian inference example, aggregating subset posterior distributions via their Wasserstein barycenter yields guarantees on the original inference task [47].
If the measures µ j are discrete, their barycenter can be computed relatively efficiently via either a sparse linear program [2], or regularized projection-based methods [16,7,51,17]. However, 1. these techniques scale poorly with the support of the measures, and quickly become impractical as the support becomes large. 2. When the input measures are continuous, to the best of our knowledge the only option is to discretize them via sampling, but the rate of convergence to the true (continuous) barycenter is not well-understood. These two confounding factors make it difficult to utilize barycenters in scenarios like parallel Bayesian inference where the measures are continuous and a fine approximation is needed. These are the primary issues we work to address in this paper.
Given sample access to J potentially continuous distributions µ_j, we propose a communication-efficient, parallel algorithm to estimate their barycenter. Our method can be parallelized to J worker machines, and the messages sent between machines are merely single integers. We require a discrete approximation only of the barycenter itself, making our algorithm semi-discrete, and our algorithm scales well to fine approximations (e.g. n ≈ 10^6). In contrast to previous work, we provide guarantees on the quality of the approximation as n increases. These rates apply to the general setting in which the µ_j's are defined on manifolds, with applications to directional statistics [46]. Our algorithm is based on stochastic gradient descent as in [22] and hence is robust to gradual changes in the distributions: as the µ_j's change over time, we maintain a moving estimate of their barycenter, a task which is not possible using current methods without solving a large linear program in each iteration.
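For concreteness, the single-measure semi-discrete building block that such stochastic schemes rest on can be sketched as below. This is the standard stochastic ascent on the semi-dual of optimal transport in the spirit of [22], not the paper's full barycenter algorithm, which couples J such problems and also updates the barycenter weights; the squared Euclidean cost and step schedule are illustrative assumptions.

```python
import numpy as np

def semidiscrete_potentials(sample_mu, support, weights, n_iters=100000, step=1.0):
    """Stochastic ascent on the semi-dual of OT(mu, sum_i weights[i]*delta_{support[i]}).

    sample_mu(): draws one point from the continuous measure mu;
    support: (n, d) fixed atoms; weights: (n,) nonnegative weights summing to 1.
    """
    n = len(weights)
    v = np.zeros(n)                                   # dual potentials
    for k in range(1, n_iters + 1):
        x = sample_mu()
        cost = np.sum((support - x) ** 2, axis=1)     # c(x, y_i)
        i_star = np.argmin(cost - v)                  # atom that "wins" the sample x
        grad = weights.copy()
        grad[i_star] -= 1.0                           # stochastic gradient of the semi-dual
        v += (step / np.sqrt(k)) * grad               # ascent with a decaying step size
    return v
```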
We emphasize that we aggregate the input distributions into a summary, the barycenter, which is itself a distribution. Instead of performing any single domain-specific task such as clustering or estimating an expectation, we can simply compute the barycenter of the inputs and process it later any arbitrary way. This generality coupled with the efficiency and parallelism of our algorithm yields immediate applications in fields from large scale Bayesian inference to e.g. streaming sensor fusion.
Contributions. 1. We give a communication-efficient and fully parallel algorithm for computing the barycenter of a collection of distributions. Although our algorithm is semi-discrete, we stress that the input measures can be continuous, and even nonstationary. 2. We give bounds on the quality of the recovered barycenter as our discretization becomes finer. These are the first such bounds we are aware of, and they apply to measures on arbitrary compact and connected manifolds. 3. We demonstrate the practical effectiveness of our method, both in tracking moving distributions on a sphere, as well as in a real large-scale Bayesian inference task. | Title: Parallel Streaming Wasserstein Barycenters
Comments:
- This paper presents a new method for performing low-communication parallel inference via computing the Wasserstein barycenter of a set of distributions. Unlike previous work, this method aims to reduce certain approximations incurred by discretization. Theoretically, this paper gives results involving the rate of the convergence of the barycenter distance. Empirically, this paper shows results on a synthetic task involving a Von Mises distribution and on a logistic regression task.
- I feel that the clarity of writing in this paper is not great. It would be better to clearly (and near the beginning of the paper) give an intuition behind the methodology improvements that this paper aims to provide, relative to previous work on computing the Wasserstein barycenter. This paper quickly dives into the algorithm and theory details (“Background”, “Mathematical preliminaries”, “Deriving the optimization problem”), without giving a clear treatment of previous work, and how this methods of this paper differ from this work. It is therefore is hard to see where the material developed in previous work ends and the new methodology of this paper begins. A simple description (early on) of this method, how it differs from existing methods, and why it solves the problem inherent in these existing methods, would greatly increase the clarity of this paper.
- Furthermore, it would be nice to include more motivation behind the main theoretical results that are proved in this paper. It is hard to get a grasp on the usefulness and contribution of these results without some discussion or reasoning on why one would like to prove these results (e.g. the benefits of proving this theory).
- Finally, I do not think that the empirical results in this paper are particularly thorough. The results in Table 1 are straightforward, but these seem to be the only empirical argument of the paper, and they are quite minimal. |
nips_2017_164 | Parametric Simplex Method for Sparse Learning
High dimensional sparse learning has imposed a great computational challenge to large scale data analysis. In this paper, we are interested in a broad class of sparse learning approaches formulated as linear programs parametrized by a regularization factor, and solve them by the parametric simplex method (PSM). Our parametric simplex method offers significant advantages over other competing methods: (1) PSM naturally obtains the complete solution path for all values of the regularization parameter; (2) PSM provides a high precision dual certificate stopping criterion; (3) PSM yields sparse solutions through very few iterations, and the solution sparsity significantly reduces the computational cost per iteration. Particularly, we demonstrate the superiority of PSM over various sparse learning approaches, including Dantzig selector for sparse linear regression, LAD-Lasso for sparse robust linear regression, CLIME for sparse precision matrix estimation, sparse differential network estimation, and sparse Linear Programming Discriminant (LPD) analysis. We then provide sufficient conditions under which PSM always outputs sparse solutions such that its computational performance can be significantly boosted. Thorough numerical experiments are provided to demonstrate the outstanding performance of the PSM method. | This paper extends simplex algorithm to several sparse learning problem with regularization parameter. The proposed method can collect all the solutions (corresponding to different values of the regularization parameter) in the process of simplex algorithm. It is an efficient way to get the sparse solution path and avoid tuning the regularization parameter. The connection between path Dantzig selector formulation and sensitivity analysis looks interesting to me.
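As a reminder of the kind of parametric LP the method traces (my own restatement, using the Dantzig selector named in the abstract as the example), the problem family is

$$ \min_{\beta} \ \|\beta\|_1 \quad \text{subject to} \quad \big\| X^\top (y - X\beta) \big\|_\infty \le \lambda, $$

which becomes a standard-form LP after splitting $\beta = \beta^+ - \beta^-$ with $\beta^\pm \ge 0$; the parametric simplex method then pivots through a sequence of optimal bases as the single parameter $\lambda$ decreases, so the whole solution path over $\lambda$ is recovered from one run of pivots rather than from re-solving the LP on a grid.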
Major comments:
- The method used in this paper seems closely related to the sensitivity analysis of LP. What is the key difference? It looks like just an application of sensitivity analysis.
- The paper mentioned that the number of iterations is linear in the number of nonzero variables empirically. Is there any guarantee for such linear dependency?
Experiment:
- Table 1 is not necessary since PSM never violate the constraints.
- The simplex method is not very efficient for large-scale LPs. I did not see a comparison with interior-point or primal-dual approaches. The authors are expected to make such a comparison to make the experiments more convincing.
Typo/question:
line 122: z^+, z^- => t^+, t^-
line 291: solutio => solution
line 195: what is the relation between the feasibility condition and the sign of b and c?
line 201: \bar{x}_B and \bar{z}_N are always positive? (otherwise how to always guarantee the feasibility when \lambda is large)
line 207: here we see an upper-bound for \lambda, it contradicts the claim that the feasibility will be guaranteed when \lambda is large enough. In addition, is it possible that \lambda_min > \lambda_max?
line 236: can we guarantee that there is no cycle in such searching? |
nips_2017_2513 | Accelerated Stochastic Greedy Coordinate Descent by Soft Thresholding Projection onto Simplex
In this paper we study the well-known greedy coordinate descent (GCD) algorithm to solve ℓ1-regularized problems and improve GCD by the two popular strategies: Nesterov's acceleration and stochastic optimization. Firstly, based on an ℓ1-norm square approximation, we propose a new rule for greedy selection which is nontrivial to solve but convex; then an efficient algorithm called "SOft ThreshOlding PrOjection (SOTOPO)" is proposed to exactly solve an ℓ1-regularized ℓ1-norm square approximation problem, which is induced by the new rule. Based on the new rule and the SOTOPO algorithm, Nesterov's acceleration and stochastic optimization strategies are then successfully applied to the GCD algorithm. The resulting algorithm, called accelerated stochastic greedy coordinate descent (ASGCD), has the optimal convergence rate O(√(1/ε)); meanwhile, it reduces the iteration complexity of greedy selection up to a factor of sample size. Both theoretically and empirically, we show that ASGCD has better performance for high-dimensional and dense problems with sparse solutions. | Paper Summary:
The main idea is that Nesterov's acceleration method's and Stochastic Gradient Descent's (SGD) advantages are used to solve sparse and dense optimization problems with high-dimensions by using an improved GCD (Greedy Coordinate Descent) algorithm. First, by using a greedy rule, an $l_1$-square-regularized approximate optimization problem (find a solution close to $x^*$ within a neighborhood $\epsilon$) can be reformulated as a convex but non-trivial to solve problem. Then, the same problem is solved as an exact problem by using the SOTOPO algorithm. Finally, the solution is improved by using both the convergence rate advantage of Nesterov's method and the "reduced-by-one-sample" complexity of SGD. The resulted algorithm is an improved GCD (ASGCD=Accelerated Stochastic Greedy Coordinate Descent) with a convergence rate of $O(\sqrt{1/\epsilon})$ and complexity reduced-by-one-sample compared to the vanilla GCD.
Originality of the paper:
The proposed SOTOPO algorithm takes advantage of the l1 regularization term to investigate the potential values of the sub-gradient directions and sorts them to find the optimal direction without having to calculate the full gradient beforehand. The combination of Nesterov's advantage with the SGD advantage and the GCD advantage is less impressive. Bonus for making an efficient and rigorous algorithm despite the many pieces that had to be put together.
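For orientation, the classic GCD greedy rule picks $i = \arg\max_i |\nabla_i f(x)|$. My reading of the new rule (an assumption on my part, since the paper's equations are not reproduced in this review) is that the update direction instead solves an ℓ1-regularized ℓ1-norm-square model of the objective,

$$ h^\star \;=\; \arg\min_{h} \; \langle \nabla f(x), h\rangle + \tfrac{\beta}{2}\,\|h\|_1^2 + \lambda\,\|x + h\|_1, $$

which, if this reading is right, is exactly the problem that SOTOPO solves by sorting candidate sub-gradient directions as described above.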
Contribution:
-Reduces complexity and increases convergence rate for large-scale, dense, convex optimization problems with sparse solutions (+),
-Uses existing results known to improve performance and combines them to generate a more efficient algorithm (+),
-Proposes a criterion to reduce the complexity by identifying the non-zero directions of descent and sorting them to find the optimal direction faster (+),
-Full computation of the gradient beforehand is not necessary in the proposed algorithm (+),
-There is no theoretical way proposed for the choice of the regularization parameter $\lambda$ as a function of the batch size. The choice of $\lambda$ seems to affect the performance of the ASGCD in both batch choice cases (-).
Technical Soundness:
-All proofs to Lemmas, Corollaries, Theorems and Propositions used are provided in the supplementary material (+),
-Derivations are rigorous enough and solid. In some derivations, further references to basic optimization theorems or lemmas would be more enlightening to researchers not specialized in optimization (-).
Implementation of Idea:
The algorithm is complicated to implement (especially the SOTOPO part).
Clarity of presentation:
-Overall presentation of the paper is detailed but the reader is not helped to keep in mind the bigger picture (might be lost in the details). Perhaps reminders of the goal/purpose of each step throughout the paper would help the reader understand why each step is necessary(-),
-Regarding the order of application of different known algorithms or parts of them to the problem: it is explained but could be more clear with a diagram or pseudo-code (-),
-Notation: in equation 3, $g$ is not clearly explained and in Algorithm 1 there are two typos in referencing equations (-),
-Given the difficulty of writing such a mathematically incremental paper, the clarity is decent (+).
Theoretical basis:
-All Lemmas and transformations are proved thoroughly in the supplementary material (+),
-Some literature results related to convergence rate or complexity of known algorithms are not referenced (lines 24,25,60,143 and 73 was not explained until equation 16 which brings some confusion initially). Remark 1 could have been referenced/justified so that it does not look completely arbitrary (-),
-A comparison of the theoretical solution accuracy with the other pre-existing methods would be interesting to the readers (-),
-In the supplementary material in line 344, a $d \theta_t$ is missing from one of the integrals (-).
Empirical/Experimental basis:
-The experimental results verify the performance of the proposed algorithm with respect to the ones chosen for comparison. Consistency in the data sets used between the different algorithms, supports a valid experimental analysis (+),
-A choice of better smoothing constant $T_1$ is provided in line 208 (+) but please make it more clear to the reader why this is a better option in the case of $b=n$ batch size (-),
-The proposed method is under-performing (when the batch size is 1) compared to Katyusha for small regularization $10^{-6}$ and for the test case Mnist while for Gisette it is comparable to Katyusha. There might be room for improvement in these cases or if not it would be interesting to show which regularization value is the threshold and why. The latter means that the algorithm proposed is more efficient for large-scale problems with potentially a threshold in sparsity (minimum regularization parameter) that the authors have not theoretically explored. Moreover, there seems to be a connection between the batch size (1 or n, in other words stochastic or deterministic case) and the choice of regularization value that makes the ASGCD outperform other methods which is not discussed (-).
Interest to NIPS audience [YES]: This paper compares the proposed algorithm with well-established algorithms or performance improvement schemes and therefore would be interesting to the NIPS audience. Interesting discussion might arise related to whether or not the algorithm can be simplified without compromising it's performance. |
nips_2017_2512 | Working hard to know your neighbor's margins: Local descriptor learning loss
We introduce a loss for metric learning, which is inspired by Lowe's matching criterion for SIFT. We show that the proposed loss, which maximizes the distance between the closest positive and closest negative example in the batch, is better than complex regularization methods; it works well for both shallow and deep convolutional network architectures. Applying the novel loss to the L2Net CNN architecture results in a compact descriptor named HardNet. It has the same dimensionality as SIFT (128) and shows state-of-the-art performance in wide baseline stereo, patch verification and instance retrieval benchmarks.
To the best of our knowledge, no work in local descriptor learning fully mimics such a strategy as the learning objective.
Simonyan and Zisserman [20] proposed a simple filter plus pooling scheme learned with convex optimization to replace the hand-crafted filters and poolings in SIFT. Han et al. [14] proposed a two-stage siamese architecture: for embedding and for two-patch similarity. The latter network improved matching performance, but prevented the use of fast approximate nearest neighbor algorithms like kd-trees [21]. Zagoruyko and Komodakis [15] independently presented a similar siamese-based method which explored different convolutional architectures. Simo-Serra et al [22] harnessed hard-negative mining with a relatively shallow architecture that exploited pair-based similarity.
The three following papers have most closely followed the classical SIFT matching scheme. Balntas et al [23] used a triplet margin loss and a triplet distance loss, with random sampling of the patch triplets. They show the superiority of the triplet-based architecture over a pair-based one, although, unlike SIFT matching or our work, they sampled negatives randomly. Choy et al [7] calculate the distance matrix for mining positive as well as negative examples, followed by a pairwise contrastive loss.
Tian et al [24] use n matching pairs in batch for generating n 2 − n negative samples and require that the distance to the ground truth matchings is minimum in each row and column. No other constraint on the distance or distance ratio is enforced. Instead, they propose a penalty for the correlation of the descriptor dimensions and adopt deep supervision [25] by using intermediate feature maps for matching. Given the state-of-art performance, we have adopted the L2Net [24] architecture as base for our descriptor. We show that it is possible to learn even more powerful descriptor with significantly simpler learning objective without need of the two auxiliary loss terms. | The paper presents a variant of patch descriptor learning using neural networks and a triplet loss. While many similar approaches exist, the particular variant proposed here appears to have better results in a large number of benchmarks.
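To make the batch-level hard-negative objective concrete, here is a minimal numpy sketch of a loss in the spirit described above; the margin value, the plain L2 distance, and the masking constant are illustrative assumptions, not the paper's exact hyper-parameters.

```python
import numpy as np

def hardest_in_batch_triplet_loss(anchors, positives, margin=1.0):
    """Batch-hard triplet margin loss sketch for descriptor learning.

    anchors, positives: (n, d) descriptor arrays where row i of `anchors`
    matches row i of `positives`; all cross pairs act as negatives.
    """
    # Pairwise Euclidean distances D[i, j] = ||a_i - p_j||.
    sq = (np.sum(anchors**2, 1)[:, None] + np.sum(positives**2, 1)[None, :]
          - 2.0 * anchors @ positives.T)
    dist = np.sqrt(np.maximum(sq, 1e-12))

    pos = np.diag(dist)                        # distance of each matching pair
    off = dist + 1e8 * np.eye(len(dist))       # mask out matching pairs
    hardest_neg = np.minimum(off.min(axis=1),  # closest non-matching p_j to a_i
                             off.min(axis=0))  # closest non-matching a_j to p_i

    return np.mean(np.maximum(0.0, margin + pos - hardest_neg))
```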
Still, I am really unsure about the technical contribution as the paper is not very clear on this point. The approach appears to be very similar to others such as Balntas 16 that also used triplet losses and deep nets to learn patch descriptors. It seems that the main difference is to consider in the loss hard negative examples. However, Balntas 16 *also* uses hard negatives in the triplets (not merely random samples as argued here on line 46). So what is the difference that changes the empirical results so much?
The paper generally needs far more polish. Experiments should carefully describe the difference between different approaches (e.g. descriptor dimensionally, neural network architecture, training set, and anything else that could make a difference). Then, the main reason for the observed empirical boost should be unequivocally identified through empirical assessment. For example, if the claim is that the key is to pool hard negatives, a carefully ablation study comparing this to alternative tripled-formation strategies should be included. Part of such a study may be included in the HardTFeat vs TFeat experiment of Fig. 7, but I would liket to check with the authors that they ran both methods using the same code base and only changing the feature sampling strategy. If more than one thing changes at a time, is difficult to reach a conclusion.
If the authors could clearly identify an interesting reason explaining their empirical boost (e.g. they may conclude that the key is a new and better way of doing hard negative mining compared to what has been done so far), then the paper could be interesting enough for acceptance. If, however, the boost is due to other differences such as more tuning or a tweaked architecture, then there would be much less of a reason to accept this into NIPS.
Rebuttal: the authors provided an informative rebuttal.
One of my key question was about the difference with Blantas 16. The authors argue that Blantas 16 does *not* do hard negative mining. On re-reading Blantas 16, this is partially true: their point is that a simplified form of hard negative mining (called in-triplet hard negatives) is just as effective and in fact superior to hard negative mining, tested in e.g. "Discriminative learning of deep convolutional feature point descriptors".
Given the new experiments in the rebuttal, it seems that the main idea here is a new variant of negative mining, within each batch rather than in the whole datasets as done in "Discriminative learning of deep convolutional feature point descriptors", which seems to be similar to the HNM method described in the rebuttal.
Hence to me the main message of the paper is that, while HB and HNM are very simple and very similar approaches, HB is in practice far better than HNM. The authors should modify the paper to include a careful experimental analysis of this point, extending and consolidating the new experiments in the rebuttal.
With this and other promised improvements, the paper would be good enough for acceptance in my opinion. However, the modifications required from the submitted version are fairly large. |
nips_2017_3315 | Practical Hash Functions for Similarity Estimation and Dimensionality Reduction
Hashing is a basic tool for dimensionality reduction employed in several aspects of machine learning. However, the perfomance analysis is often carried out under the abstract assumption that a truly random unit cost hash function is used, without concern for which concrete hash function is employed. The concrete hash function may work fine on sufficiently random input. The question is if they can be trusted in the real world where they may be faced with more structured input. In this paper we focus on two prominent applications of hashing, namely similarity estimation with the one permutation hashing (OPH) scheme of Li et al. [NIPS'12] and feature hashing (FH) of Weinberger et al. [ICML'09], both of which have found numerous applications, i.e. in approximate near-neighbour search with LSH and large-scale classification with SVM. We consider the recent mixed tabulation hash function of Dahlgaard et al. [FOCS'15] which was proved theoretically to perform like a truly random hash function in many applications, including the above OPH. Here we first show improved concentration bounds for FH with truly random hashing and then argue that mixed tabulation performs similar when the input vectors are not too dense. Our main contribution, however, is an experimental comparison of different hashing schemes when used inside FH, OPH, and LSH. We find that mixed tabulation hashing is almost as fast as the classic multiply-modprime scheme (ax + b) mod p. Mutiply-mod-prime is guaranteed to work well on sufficiently random data, but here we demonstrate that in the above applications, it can lead to bias and poor concentration on both real-world and synthetic data. We also compare with the very popular MurmurHash3, which has no proven guarantees. Mixed tabulation and MurmurHash3 both perform similar to truly random hashing in our experiments. However, mixed tabulation was 40% faster than MurmurHash3, and it has the proven guarantee of good performance (like fully random) on all possible input making it more reliable. | The main contribution of the paper is the set of empirical comparisons of various hashing functions, mutiply, 2-universal (2U), Murmur, and mixed tabulation, for (1) similarity estimation and (2) LSH. This type of empical evaluations is very important and will benefit practioners. Overall, this is a good paper. For improving the paper, I would suggest the authors to take into consideration of the following comments (some of which might be crucial):
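As background on one of the two applications, here is a minimal sketch of the feature-hashing (FH) map for a single sparse input vector. The point of the paper is precisely that the guarantees depend on how the index hash h and the sign hash s are implemented (multiply-mod-prime vs. MurmurHash3 vs. mixed tabulation), which this sketch deliberately leaves abstract.

```python
import numpy as np

def feature_hash(indices, values, m, h, s):
    """Map a sparse vector (indices, values) into m buckets.

    h: feature index -> bucket in [0, m); s: feature index -> sign in {-1, +1}.
    Both are supplied by the caller, e.g. a 2-universal scheme or mixed tabulation.
    """
    out = np.zeros(m)
    for i, v in zip(indices, values):
        out[h(i)] += s(i) * v
    return out
```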
---1. There is already a study of using 2U and 4U hashing for similarity estimation and classification tasks. See
[1] b-Bit Minwise Hashing in Practice. Proceedings of the 5th Asia-Pacific Symposium on Internetware, 2013.
[2] https://arxiv.org/abs/1205.2958, which is a more detailed version of [1], as far as I can tell.
[1,2] used 2U and 4U for evalating both b-bit minwise hashing and FH (although they used VW to refer to FH). For example, [1,2] showed that for dense sets, 2U and 4U had an obvious bias and larger MSE.
I would suggest the authors to cite one of [1,2] and comment on the additional contributions beyond [1,2].
---2. [1,2] already showed that for classification tasks, 2U and 4U seems to be sufficiently accurate compared to using fully random hash functions. This paper ommited classification experiments. It is not at all surprising that mixed tabulation will work as well as fully random hash for classification, which the NIPS community would probably care more (than LSH).
---3. This paper used 2U instead of 4U. It would be more coninvincing to present the results using 4U (or both 2U and 4U) as it is well-understood that in some cases 4U is indeed better.
---4. The LSH experiments can have been presented more convincingly.
a) In practice, we typically must guaranttee a good recall, even at the cost of retrieving more points. therefore, while the presentation of using the ratio # retrieved / recall is useful, it may cover important details.
b) only mulply hash and mix tabulation results are presented. It is important to also present fully random results and 2U/4U hashing results.
c) If the results from different hash functions as shown in Figure 5 really differ that much, then it is not that appropriate to compare the results at a fixed K and L, because each hash function may work better with a particular parameter.
d) The LSH parameters are probably chosen not so appropriately. The paper says it followed [30] for the parameters K,L by using K ={8,10,12} and L={8,10,12}. However, [30] actually used for K = {6, 8, 10, 12} and L = {4, 8, 16, 32, 64, 128}
e) LSH-OPH with K = {8,10,12} is likely too large. L =[8,10,12} is likely too small, according to prior experience. If the parameters are not (close to) optimal, then it is difficult to judge usefulness of the experiment results.
-----------
In Summary, overall, the topic of this paper is very important. The limitation of the current submission includes i) a lack of citation of the prior work [1,2] and explanation on the additional contribution beyond [1,2]. ii) the classification experiments are missing. It is very possible that no essential difference will be observed for any hash function, as concluded in the prior work [1,2]. iii) the LSH experiments may have issues and the conclusions drawn from the LSH experiments are hard to judge. |
nips_2017_299 | On the Model Shrinkage Effect of Gamma Process Edge Partition Models Iku Ohama
The edge partition model (EPM) is a fundamental Bayesian nonparametric model for extracting an overlapping structure from binary matrix. The EPM adopts a gamma process (ΓP) prior to automatically shrink the number of active atoms. However, we empirically found that the model shrinkage of the EPM does not typically work appropriately and leads to an overfitted solution. An analysis of the expectation of the EPM's intensity function suggested that the gamma priors for the EPM hyperparameters disturb the model shrinkage effect of the internal ΓP. In order to ensure that the model shrinkage effect of the EPM works in an appropriate manner, we proposed two novel generative constructions of the EPM: CEPM incorporating constrained gamma priors, and DEPM incorporating Dirichlet priors instead of the gamma priors. Furthermore, all DEPM's model parameters including the infinite atoms of the ΓP prior could be marginalized out, and thus it was possible to derive a truly infinite DEPM (IDEPM) that can be efficiently inferred using a collapsed Gibbs sampler. We experimentally confirmed that the model shrinkage of the proposed models works well and that the IDEPM indicated state-of-the-art performance in generalization ability, link prediction accuracy, mixing efficiency, and convergence speed. | This paper presents several variants of GP-EPM to solve the shrinkage problem of GP-EPM for modelling relational data. The proposed models are demonstrated to have better link prediction performance than GP-EPM by estimating the number of latent communities better.
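For readers who have not seen the EPM, its generative mechanism can be summarized as follows; this is the standard Bernoulli-Poisson link construction as I recall it, and the paper's exact notation and priors may differ. Each binary entry is generated as

$$ m_{ijk} \sim \mathrm{Poisson}\big(r_k\, \phi_{ik}\, \phi_{jk}\big), \qquad x_{ij} = \mathbf{1}\Big\{\textstyle\sum_k m_{ijk} \ge 1\Big\}, \qquad \text{so} \quad \Pr(x_{ij}=1) = 1 - \exp\Big(-\textstyle\sum_k r_k\, \phi_{ik}\, \phi_{jk}\Big), $$

with a gamma-process prior meant to shrink the weights $r_k$ of unused communities. The non-identifiability discussed below is presumably the ability to trade scale between $r_k$ and the $\phi_{\cdot k}$'s without changing the likelihood, which is what interferes with the intended shrinkage.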
Note I always point out this problem to my students as: the model is
"non-identifiable". Don't get me wrong, the whole body of work,
EPM, etc., is fabulous, but identifiability was always an important lesson in class.
The paper is well-written and easy to follow. The proposed models are well-motivated empirically with synthetic examples and then with theoretical analysis. To me, the main contribution of the paper is it points out that the obvious unidentifiability issue in GP-EPM can, interestingly, be a problem in some real cases. The proposed CEPM is just a naive model, while DEPM is a simple but intuitive solution and DEPM to IDEPM is like mapping the Dirichlet distribution to Dirichlet process. The solutions are not novel but straightforward to the problem. Finally the experimental results support the main claims, which makes it an interesting paper.
Note, in 4.3 (line 187), you say "one remarkable property". This *should* be well known in the non-parametric community. The result is perhaps the simplest case of the main theorem in "Poisson Latent Feature Calculus for Generalized Indian Buffet Processes", Lancelot James, 2014. The terms in $K_+$ in (5) are the marginal for the gamma process. The theorem can be pretty much written down without derivation using standard results.
This is a good paper cleaning up the obvious flaws in EPM and its implementation. It covers good though routine theory, and experimental work. Section 4.3 should be rewritten.
However, I belief the important issue of identifiability should be better discussed and more experiments done. For instance, a reviewer points out that the problem is partially overcome more recently using stronger priors. Clearly, identifiability is something we require to make theory easier and priors more interpretable. It is not strictly necessary, though it is routinely expected in the statistics community.
Anyway, would be good to see more discussion of the issues and further experimental investigation. |
nips_2017_833 | Learning to Inpaint for Image Compression
We study the design of deep architectures for lossy image compression. We present two architectural recipes in the context of multi-stage progressive encoders and empirically demonstrate their importance on compression performance. Specifically, we show that: (a) predicting the original image data from residuals in a multi-stage progressive architecture facilitates learning and leads to improved performance at approximating the original content and (b) learning to inpaint (from neighboring image pixels) before performing compression reduces the amount of information that must be stored to achieve a high-quality approximation. Incorporating these design choices in a baseline progressive encoder yields an average reduction of over 60% in file size with similar quality compared to the original residual encoder. | This paper proposes a progressive image compression method that's "hybrid". The authors use the framework of Toderici et al (2016) to set up a basic progressive encoder, and then they improve on it by studying how to better propagate information between iterations. The solution involves using "temporal" residual connections, without the explicit need to have an RNN per se (though this point is a bit debatable, because in theory, if the residual connections are transformed by some convolution, are they acting as an additive RNN or not?). However, the authors also employ a predictor (inpainter). This allows them to encode each patch after trying to predict an "inpainted" version first. It is important to note here that this will only work (in practice) if all the patches on which this patch depends have been decoded. This introduces a linear dependency on patches, which may make the method too slow in practice, and it would be nice to see a bit more in the text about this issue (maybe some timing information vs. not using inpainting).
Overall, I think the paper was well written and an expert should be able to reproduce the work.
Given that the field of neural image compression is still in its infancy, that most of the recent papers have been focusing on non-progressive methods, and that this paper proposes a *progressive* encoder/decoder, I think we should seriously consider accepting it.
nips_2017_2328 | The Expxorcist: Nonparametric Graphical Models Via Conditional Exponential Densities
Non-parametric multivariate density estimation faces strong statistical and computational bottlenecks, and the more practical approaches impose near-parametric assumptions on the form of the density functions. In this paper, we leverage recent developments to propose a class of non-parametric models which have very attractive computational and statistical properties. Our approach relies on the simple function space assumption that the conditional distribution of each variable conditioned on the other variables has a non-parametric exponential family form. | This paper proposes a method to estimate a joint multivariate density non-parametrically by estimating a product of marginal conditional exponential densities. The marginal distribution of each variable is conditioned on the neighbors of that variable in a graph. The authors first discuss how their work relate to the literature. They study the consistency of their approach in terms of the estimated marginals with respect to a joint distribution, by relying on a theorem in ref. 28. They also develop an estimation algorithm, study statistical properties of their approach and discuss the relationships with copulas. The paper ends with experimental results. The paper is well written.
1) Are there guarantees if the real distribution does not belong to the family of distributions being estimated?
2) I am not very familiar with nonparametric density estimation. How practical is the assumption that the base measures are known?
3) The experiments seem well conducted and the baselines selected in a sensible manner. The evaluation is performed both on synthetic and real data. On synthetic data, the approach is only evaluated on models matching the assumption that the factors are at most of size 2. Imho, testing the approach on more general models would help understand its usefulness.
4) l119: why doesn't nonparametric estimation benefit from estimating a joint distribution as a product of conditional marginal distributions?
5) How easy/hard would it be to extend your approach to cliques of size greater than 2?
6) Will you make your code available?
7) Based on the definition of the density, shouldn't the edges of the graph E mentioned in the introduction correspond to non-independence rather than independence?
Typos:
You use both "nonparametric" and "non-parametric". |
nips_2017_1863 | Adaptive SVRG Methods under Error Bound Conditions with Unknown Growth Parameter
The error bound, an inherent property of an optimization problem, has recently been revived in the development of algorithms with improved global convergence without strong convexity. The most studied error bound is the quadratic error bound, which generalizes strong convexity and is satisfied by a large family of machine learning problems. The quadratic error bound has been leveraged to achieve linear convergence in many first-order methods including the stochastic variance reduced gradient (SVRG) method, which is one of the most important stochastic optimization methods in machine learning. However, the studies along this direction face the critical issue that the algorithms must depend on an unknown growth parameter (a generalization of the strong convexity modulus) in the error bound. This parameter is difficult to estimate exactly, and algorithms choosing this parameter heuristically do not have a theoretical convergence guarantee. To address this issue, we propose novel SVRG methods that automatically search for this unknown parameter on the fly during optimization while still obtaining almost the same convergence rate as when this parameter is known. We also analyze the convergence property of SVRG methods under the Hölderian error bound, which generalizes the quadratic error bound. | This paper introduces a restarting scheme for SVRG specialised to the case where the underlying problem satisfies the quadratic error bound (QEB) condition, a weaker condition than strong convexity that still allows for linear convergence. The algorithm solves the problem of having to know the value c of the QEB constant beforehand.
The restarting scheme applies full SVRG repeatedly in a loop. If after the application of SVRG the resulting iterate shows less than a 25% relative improvement (according to a particular notion of solution quality they state), the value of c is increased by a factor of \sqrt{2}, so that the next run of SVRG uses double the number of iterations in its inner loop (T propto c^2).
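For concreteness, a minimal sketch of the restart loop as I read it; the gradient-norm progress measure and the bare-bones inner SVRG stage are my stand-ins for the paper's exact quantities.

```python
import numpy as np

def svrg_stage(w, grad_full, grad_i, n, T, eta):
    """One SVRG stage: a full-gradient snapshot followed by T inner steps."""
    w_snap, mu = w.copy(), grad_full(w)
    for _ in range(T):
        i = np.random.randint(n)
        w = w - eta * (grad_i(w, i) - grad_i(w_snap, i) + mu)
    return w

def restarted_svrg(w0, grad_full, grad_i, n, c0=1.0, eta=0.1, outer_iters=20):
    """Grow the unknown growth parameter c (and hence T ~ c^2) whenever the
    relative improvement of a stage falls below 25%."""
    w, c = np.asarray(w0, dtype=float), c0
    gap = lambda v: np.linalg.norm(grad_full(v))   # stand-in for the paper's quality measure
    for _ in range(outer_iters):
        w_new = svrg_stage(w, grad_full, grad_i, n, T=int(np.ceil(c ** 2)), eta=eta)
        if gap(w_new) > 0.75 * gap(w):             # less than 25% relative improvement
            c *= np.sqrt(2.0)                      # doubles the next inner budget
        w = w_new
    return w
```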
The use of doubling schemes for estimating constants in optimisation algorithms is a very standard technique. Its use with SVRG feels like only an incremental improvement. A particular problem with such techniques is setting the initial value of the constant. Too small a value will result in 5-10 wasted epochs, whereas too large a value results in very slow convergence.
The experiments section is well-explained, with standard test problems used. I do have some issues with the number of steps shown on the plots. The x axis #grad/n goes to 1000 on each, which is unrealistically large. It masks the performance of the algorithm during the early iterations. In the SVRG and SAGA papers the x axis spans from 0 to 30-100 of #grad/n, since this represents the range where the loss bottoms out on held-out test sets. It's hard to tell from the plots as they are at the moment if the algorithm works well where it matters. This would be much clearer if error on a test set was shown as well.
On the first plot, a comparison is made against SVRG with a fixed number of inner iterations T, for a variety of T. The largest T tried had the best performance; I would suggest adding larger values to the plot to make clear that using too large a value results in worse performance (i.e. there is a sweet spot). I would remove the T=100 from the plot to make room.
In terms of language, the paper is ok, although in a number of places the language is a little informal. There are a moderate number of grammatical issues. The paper does require additional proofreading to get it up to standard.
nips_2017_478 | Improved Dynamic Regret for Non-degenerate Functions
Recently, there has been a growing research interest in the analysis of dynamic regret, which measures the performance of an online learner against a sequence of local minimizers. By exploiting the strong convexity, previous studies have shown that the dynamic regret can be upper bounded by the path-length of the comparator sequence. In this paper, we illustrate that the dynamic regret can be further improved by allowing the learner to query the gradient of the function multiple times, and meanwhile the strong convexity can be weakened to other non-degenerate conditions. Specifically, we introduce the squared path-length, which could be much smaller than the path-length, as a new regularity of the comparator sequence. When multiple gradients are accessible to the learner, we first demonstrate that the dynamic regret of strongly convex functions can be upper bounded by the minimum of the path-length and the squared path-length. We then extend our theoretical guarantee to functions that are semi-strongly convex or self-concordant. To the best of our knowledge, this is the first time that semi-strong convexity and self-concordance are utilized to tighten the dynamic regret. | In this paper, the authors study the problem of minimizing the dynamic regret in online learning. First, they introduce squared path-length to measure the complexity of the comparator sequence. Then, they demonstrate that if multiple gradients are accessible to the learner, the dynamic regret of strongly convex functions can be upper bounded by the minimum of the path-length and the squared path-length. Finally, they extend their theoretical guarantees to functions that are semi-strongly convex or self-concordant.
This is a theoretical paper for analyzing the dynamic regret. The main difference from previous work is that the learner is able to query the gradient multiple times. The authors prove that under this feedback model, dynamic regret could be upper bounded by the minimum of the path-length and the squared path-length, which is a significant improvement when the squared path-length is small.
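To illustrate the feedback model, here is a bare-bones sketch of online multiple gradient descent: at each round the learner commits to x_t and then queries the gradient of f_t several times to warm-start the next round. The fixed step size and the use of the last inner iterate as the next decision are assumptions of this sketch, not the paper's exact algorithm.

```python
import numpy as np

def omgd(x0, grad_fns, eta=0.1, K=5, project=lambda x: x):
    """Online multiple gradient descent sketch.

    grad_fns: list of callables, grad_fns[t](x) returns the gradient of f_t at x.
    At round t the decision x_t is played, then K gradient queries on the same
    f_t refine the starting point for round t+1.
    """
    x = np.asarray(x0, dtype=float)
    played = []
    for grad_t in grad_fns:
        played.append(x.copy())          # decision committed for this round
        z = x
        for _ in range(K):               # multiple gradient queries on f_t
            z = project(z - eta * grad_t(z))
        x = z                            # warm start for the next round
    return played
```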
Strong Points:
1. A new performance measure is introduced to bound the dynamic regret.
2. When functions are strongly convex, the authors develop a new optimization algorithm and prove its dynamic regret is upper bounded by the minimum of the path-length and the squared path-length.
3. This is the first time that semi-strong convexity and self-concordance are utilized to tighten the dynamic regret.
Suggestions/Questions:
1. It is better to provide some empirical studies to support the theoretical results.
2. For self-concordant functions, why do we need an additional condition in (11)?
3. Due to the matrix inverse, the complexity of online multiple Newton update (OMNU) is much higher than online multiple gradient descent (OMGD), which should be mentioned explicitly. |
nips_2017_70 | Dual-Agent GANs for Photorealistic and Identity Preserving Profile Face Synthesis
Synthesizing realistic profile faces is promising for more efficiently training deep pose-invariant models for large-scale unconstrained face recognition, by populating samples with extreme poses and avoiding tedious annotations. However, learning from synthetic faces may not achieve the desired performance due to the discrepancy between distributions of the synthetic and real face images. To narrow this gap, we propose a Dual-Agent Generative Adversarial Network (DA-GAN) model, which can improve the realism of a face simulator's output using unlabeled real faces, while preserving the identity information during the realism refinement. The dual agents are specifically designed for distinguishing real vs. fake and identities simultaneously. In particular, we employ an off-the-shelf 3D face model as a simulator to generate profile face images with varying poses. DA-GAN leverages a fully convolutional network as the generator to generate high-resolution images and an auto-encoder as the discriminator with the dual agents. Besides the novel architecture, we make several key modifications to the standard GAN to preserve pose and texture, preserve identity and stabilize the training process: (i) a pose perception loss; (ii) an identity perception loss; (iii) an adversarial loss with a boundary equilibrium regularization term. Experimental results show that DA-GAN not only presents compelling perceptual results but also significantly outperforms state-of-the-art methods on the large-scale and challenging NIST IJB-A unconstrained face recognition benchmark. In addition, the proposed DA-GAN is also promising as a new approach for solving generic transfer learning problems more effectively. DA-GAN is the foundation of our submissions to NIST IJB-A 2017 face recognition competitions, where we won the 1st places on the tracks of verification and identification. | This work uses GANs to generate synthetic data to use for supervised training of facial recognition systems. More specifically, they use an image-to-image GAN to improve the quality of faces generated by a face simulator. The simulator is able to produce a wider range of face poses for a given face, and the GAN is able to refine the simulator's output such that it is more closely aligned with the true distribution of faces (i.e. improve the realism of the generated face) while maintaining the facial identity and pose the simulator outputted. They show that by fine-tuning a facial recognition system on this additional synthetic data they are able to improve performance and outperform previous state of the art methods.
Pros:
- This method is simple, apparently effective and is a nice use of GANs for a practical task. The paper is clearly written
Cons:
- My main concern with this paper is regarding the way in which the method is presented. The authors term their approach "Dual Agent GANS" and seem to claim a novel GAN architecture. However, it is not clear to me what aspect of their GAN is particularly new. The "dual agent"aspect of their GAN comes from the fact that they have a standard adversarial term (in their case the BE-GAN formulation) plus a cross entropy term to ensure the facial identity is preserved. But previous work (e.g. InfoGAN, Auxiliary classifier GAN) have both also utilized a combination of "heads". So it seems odd to me that the authors are pushing this work as a new GAN architecture/method. I realize it's very trendy these days to come up with a slightly new GAN architecture and give it a new cool name, but this obfuscates the contributions. I think this is an interesting paper from perspective of using GANs in a data augmentation pipeline (and certainly their particular formulation is tailored to the task at hand) but I do not like that the authors appear to be claiming a new GAN method.
- Since I think the main contribution of this paper is a data augmentation technique for facial recognition systems, it'd be good to see > 1 dataset explored.
Some additional comments/questions:
- In eq. 8, do you mean to have a minus sign in the L_G term?
- What was the performance of the network you trained before fine-tuning? I.e., how much of the improvement comes from this technique vs. different/better architectures/hyper-parameters/etc. compared to other methods?
nips_2017_972 | Learning Affinity via Spatial Propagation Networks
In this paper, we propose spatial propagation networks for learning the affinity matrix for vision tasks. We show that by constructing a row/column linear propagation model, the spatially varying transformation matrix exactly constitutes an affinity matrix that models dense, global pairwise relationships of an image. Specifically, we develop a three-way connection for the linear propagation model, which (a) formulates a sparse transformation matrix, where all elements can be outputs from a deep CNN, but (b) results in a dense affinity matrix that effectively models any task-specific pairwise similarity matrix. Instead of designing the similarity kernels according to image features of two points, we can directly output all the similarities in a purely data-driven manner. The spatial propagation network is a generic framework that can be applied to many affinity-related tasks, such as image matting, segmentation and colorization, to name a few. Essentially, the model can learn semantically-aware affinity values for high-level vision tasks due to the powerful learning capability of deep CNNs. We validate the framework on the task of refinement of image segmentation boundaries. Experiments on the HELEN face parsing and PASCAL VOC-2012 semantic segmentation tasks show that the spatial propagation network provides a general, effective and efficient solution for generating high-quality segmentation results. | The paper describes a method for learning pairwise affinities for recurrent label refinement in deep networks. A typical application is as follows: a feature map is produced by a convolutional network and is then refined by additional layers that in effect pass messages between pixels. The weights for such message passing are often set using hand-defined feature spaces (although prior work on learning such weights exists, see below). The submission describes a formulation for learning such weights.
The paper has a number of issues that lead me to recommend rejection:
1. The general problem tackled in this paper -- refining poorly localized segmentation boundaries -- has been tackled in many publications. Two representative approaches are: (a) add layers that model mean field inference in a dense CRF and train them jointly with the initial segmentation network (as in [1,13,30]); (b) add a convolutional refinement module, such as the context module in [Multi-Scale Context Aggregation by Dilated Convolutions, ICLR 2016], also trained jointly with the segmentation network. The submission should provide controlled experiments that compare the presented approach to this prior work, but it doesn't. An attempt is made in Table 1, but it is deeply flawed. As far as I can tell, the dense CRF is not trained end-to-end with the segmentation network, as commonly done in the literature, such as [1,13,30]. And the context module (the ICLR 2016 work referred to above) is not compared to at all, even though it was developed for this specific purpose and is known to yield good results. (In fact, the ICLR 2016 paper reports refinement results on the VOC 2012 dataset with the VGG network that are better than the SPN results in Table 1 (IoU of 73.9 in the "Front + Large + RNN" condition in Table 3 of the ICLR 2016 paper). And that's a comparatively old work by now.)
2. There is other related work in the literature that specifically addresses learning affinities for label refinement. This work is closely related but is not cited, discussed, or compared to:
-Semantic Segmentation with Boundary Neural Fields. Gedas Bertasius, Jianbo Shi and Lorenzo Torresani. CVPR 2016
- Convolutional Random Walk Networks for Semantic Image Segmentation. Gedas Bertasius, Lorenzo Torresani, Stella X. Yu, Jianbo Shi. CVPR 2017
- Learning Dense Convolutional Embeddings for Semantic Segmentation. Adam W. Harley, Konstantinos G. Derpanis, Iasonas Kokkinos. ICLR Workshop 2016
3. The results on VOC 2012 are well below the current state of the art, which stands at 86% IoU (compared to 79.8% in the submission). One could argue that the authors are more interested in evaluating the contribution of their refinement approach when added to some baseline segmentation networks, but (a) such controlled evaluation was not done properly (see point (1) above); and (b) the authors' combined approach is quite elaborate, so it would be hard to claim that it is somehow much simpler than state-of-the-art networks that dominate the leaderboard. With this level of complexity, it is reasonable to ask for state-of-the-art performance.
Minor comment:
- Since the dense CRF seems to play an important role in the submission, as a baseline that is repeatedly compared against, the submission should cite the dense CRF paper: [Efficient Inference in Fully Connected CRFs with Gaussian Edge Potentials, NIPS 2011]. |
nips_2017_2851 | Semi-supervised Learning with GANs: Manifold Invariance with Improved Inference
Semi-supervised learning methods using Generative adversarial networks (GANs) have shown promising empirical success recently. Most of these methods use a shared discriminator/classifier which discriminates real examples from fake while also predicting the class label. Motivated by the ability of the GANs generator to capture the data manifold well, we propose to estimate the tangent space to the data manifold using GANs and employ it to inject invariances into the classifier. In the process, we propose enhancements over existing methods for learning the inverse mapping (i.e., the encoder) which greatly improves in terms of semantic similarity of the reconstructed sample with the input sample. We observe considerable empirical gains in semi-supervised learning over baselines, particularly in the cases when the number of labeled examples is low. We also provide insights into how fake examples influence the semi-supervised learning procedure. | Chiefly theoretical work with some empirical results on SVHN and CIFAR10. This paper proposes using a trained GAN to estimate mappings from and to the true data distribution around a data point, and use a kind of neural PCA to estimate tangents to those estimates, then used for training a manifold-invariant classifier. Some additional work investigating regular GAN vs feature matching GAN is briefly presented, and an augmentation to BiGAN.
It feels a bit like this is two works squeezed into one. Maybe three.
There is an exploration of FM-GAN vs a regular GAN training objective, with some nice results shown where the classification entropy (confidence of the classification, as I read it) is much better for an FM-GAN than for a regular GAN.
There is an Augmented BiGAN which achieves nicely lower classification error than BiGAN on infer-latents-then-generate g(h(x)).
The most substantial work presented here is the manifold-invariance. The first thing I wrestle with is that the method is a bit complex, making it probably tricky for others to get right, and complex/fiddly to implement. In particular, 2.1.2 proposes to freeze f, g, and h, and introduce p and pbar as a nonlinear approximation to SVD. This introduces the second thing I wrestle with: numerous layered approximations. The method requires g and h to be good approximations to generate and infer to and from the data manifold. The results (e.g. figure 2) do not indicate that these approximations are always very good. The method requires that p and pbar reasonably capture singular directions in the latent space, but figure 3 shows this approximation only sort-of holds. This has me wondering about transferability, and wondering how to measure each of the approximation errors to evaluate whether the method is useful for a dataset. The CIFAR-10 results reinforce my concern.
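As a point of reference for how the pieces are meant to fit together, here is a generic tangent-prop-style sketch of injecting generator-derived manifold directions into a classifier penalty. Random latent directions stand in for the learned singular directions (p, pbar), and the penalty form is my own simplification, not the paper's exact objective.

```python
import torch
from torch.autograd.functional import jvp

def tangent_penalty(classifier, generator, encoder, x, num_dirs=2):
    """Estimate tangents of the data manifold at x as Jacobian-vector products
    of the generator g at h(x), then penalize the classifier's directional
    derivative along those tangents (tangent-prop style).  Assumes the
    generator output has the same shape as x."""
    z = encoder(x).detach()
    penalty = x.new_zeros(())
    for _ in range(num_dirs):
        v = torch.randn_like(z)
        v = v / v.norm(dim=-1, keepdim=True)
        # tangent ~ J_g(z) v, a direction along the estimated data manifold
        _, tangent = jvp(generator, (z,), (v,), create_graph=True)
        x_req = x.detach().requires_grad_(True)
        logits = classifier(x_req)
        grads = torch.autograd.grad(logits.sum(), x_req, create_graph=True)[0]
        # squared directional derivative of the (summed) logits along the tangent
        penalty = penalty + ((grads * tangent).flatten(1).sum(dim=1) ** 2).mean()
    return penalty / num_dirs
```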
The MNIST result in line 264 (0.86) is quite good for semi-supervised. What is the difference between the results in 264-265 vs the results in Table 1? Different numbers are given for SVHN in each location, yet line 251 suggests the results in both locations are semi-sup. Work would feel more complete with comparisons on semi-sup MNIST in Table 1, then you could show ladder, CAE, MTC, etc. I'm guessing you're up against space constraints here...
Table 1 missing some competitive results. A quick Google search for svhn semi supervised gives https://arxiv.org/abs/1610.02242 showing 7.05% for SVHN with 500 labels, 5.43% with 1000 labels; 16.55% CIFAR10@4k. https://papers.nips.cc/paper/6333-regularization-with-stochastic-transformations-and-perturbations-for-deep-semi-supervised-learning.pdf reports 6.03% on SVHN with 1% of labels (~700).
Missing conclusion/future directions.
Minor nitpicks:
grammar of lines 8-9 needs work
grammar of lines 42-43
lines 72/73 have two citations in a row
lines 88/89 unnecessary line break
line 152 wrong R symbol
line 204 comma before however
line 295 'do not get as better' grammar needs work
All in all, the results are OK-to-good but not universally winning. The topic is of interest to the community, including a few novel ideas. (The paper perhaps should be multiple works.) It looks like a new SOTA is presented for SVHN semisup, which tips me toward accept. |
nips_2017_2634 | Perturbative Black Box Variational Inference
Black box variational inference (BBVI) with reparameterization gradients triggered the exploration of divergence measures other than the Kullback-Leibler (KL) divergence, such as alpha divergences. In this paper, we view BBVI with generalized divergences as a form of estimating the marginal likelihood via biased importance sampling. The choice of divergence determines a bias-variance trade-off between the tightness of a bound on the marginal likelihood (low bias) and the variance of its gradient estimators. Drawing on variational perturbation theory of statistical physics, we use these insights to construct a family of new variational bounds. Enumerated by an odd integer order K, this family captures the standard KL bound for K = 1, and converges to the exact marginal likelihood as K → ∞. Compared to alpha-divergences, our reparameterization gradients have a lower variance. We show in experiments on Gaussian Processes and Variational Autoencoders that the new bounds are more mass covering, and that the resulting posterior covariances are closer to the true posterior and lead to higher likelihoods on held-out data. | Summary:
The authors present a new variational objective for approximate Bayesian inference. The variational objective is nicely framed as an interpolation between classic importance sampling and the traditional ELBO-based variational inference. Properties of the variance of importance sampling estimator and ELBO estimators are studied and leveraged to create a better marginal likelihood bound with tractable variance properties. The new bound is based on a low-degree polynomial of the log-importance weight (termed the interaction energy). The traditional ELBO estimator is expressed as a first order polynomial in their more general framework.
The authors then test out the idea on a Gaussian process regression problem and a Variational autoencoder.
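For what it's worth, the polynomial construction can be checked numerically: for odd K, exp(V) >= exp(V0) * sum_{k<=K} (V - V0)^k / k!, so averaging the right-hand side under q gives a lower bound on the marginal likelihood. The choice V0 = mean(V) below is my assumption; with that choice, K = 1 reduces to exp(ELBO).

```python
import numpy as np
from math import factorial

def perturbative_bound(logp_joint, logq, z_samples, K=3):
    """Monte Carlo estimate of an odd-order polynomial lower bound on p(x),
    with V = log p(x, z) - log q(z) and reference point V0 = mean(V)."""
    assert K % 2 == 1, "the Taylor remainder argument needs odd K"
    V = logp_joint(z_samples) - logq(z_samples)
    V0 = V.mean()
    poly = sum((V - V0) ** k / factorial(k) for k in range(K + 1))
    return np.exp(V0) * poly.mean()      # K = 1 recovers exp(ELBO) for this V0
```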
Quality: I enjoyed this paper --- I thought the idea was original, the presentation is clear, and the experiments were convincing. The paper appears to be technically correct, and the method itself appears to be effective.
Clarity: This paper was a pleasure to read. Not only was this paper extremely clear and well written, the authors very nicely frame their work in the context of other current and previous research.
Originality: While establishing new lower bounds on the marginal likelihood is a common subject at this point, the authors manage to approach this with originality.
Impact: I think this paper has the potential to be highly impactful --- the spectrum drawn from importance sampling to KLVI is an effective way of framing these ideas for future research.
Questions/Comments:
- Figure 1: should the legend text "KLVI: f(x) = 1 + log(x)" read "f(x) = log(x)" ? I believe that would cohere with the bullet point on line 124.
- How does the variance of the marginal likelihood bound estimators relate to the variance of the gradients of those estimators wrt variational params? KLVI reparameterization gradients can have some unintuitive irreducibility (Roeder et al, https://arxiv.org/abs/1703.09194); is this the case for PVI? |
nips_2017_717 | Union of Intersections (UoI) for Interpretable Data Driven Discovery and Prediction
The increasing size and complexity of scientific data could dramatically enhance discovery and prediction for basic scientific applications. Realizing this potential, however, requires novel statistical analysis methods that are both interpretable and predictive. We introduce Union of Intersections (UoI), a flexible, modular, and scalable framework for enhanced model selection and estimation. Methods based on UoI perform model selection and model estimation through intersection and union operations, respectively. We show that UoI-based methods achieve low-variance and nearly unbiased estimation of a small number of interpretable features, while maintaining high-quality prediction accuracy. We perform extensive numerical investigation to evaluate a UoI algorithm (UoI_Lasso) on synthetic and real data. In doing so, we demonstrate the extraction of interpretable functional networks from human electrophysiology recordings as well as accurate prediction of phenotypes from genotype-phenotype data with reduced features. We also show (with the UoI_L1Logistic and UoI_CUR variants of the basic framework) improved prediction parsimony for classification and matrix factorization on several benchmark biomedical data sets. These results suggest that methods based on the UoI framework could improve interpretation and prediction in data-driven discovery across scientific fields. | This paper focuses on model selection and, to some extent, feature selection in large datasets with many features, of which only a small subset are assumed to be necessary for accurate prediction. The authors propose a general method by which model selection is performed by way of feature compression, performed by taking the intersection across multiple regularization parameters in an ensemble method, and then model estimation by taking a union over multiple outputs. (This is also the major contribution of the paper.) A second contribution is found in the union operation in a model averaging step with a boosting/bagging flavor. Overall, I found the paper's method section well written and the idea proposed to be complete. The paper's experimental section was difficult to follow, but the results do seem to support the framework. One major missing part of the paper is a reasonable discussion of using the framework beyond a Lasso base. Are there reasons why this method would not work for classification? Are there potential hitches to using this method with already-ensemble-based methods like random forests? While there are many uses for the UoI with a Lasso base already, it would increase the general interest of the framework if UoI could be placed in the more general ML space.
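To make the selection/estimation split concrete, here is a rough UoI-Lasso-style sketch; the bootstrap count, the plain support intersection, and the simple averaging in the estimation step are my assumptions, and the paper's union step is more elaborate than this.

```python
import numpy as np
from sklearn.linear_model import Lasso, LinearRegression
from sklearn.utils import resample

def uoi_lasso(X, y, lambdas=(0.01, 0.1, 1.0), n_boot=20, seed=0):
    """Rough sketch of union-of-intersections with a Lasso base:
    (1) selection -- for each regularization strength, intersect the supports
        found across bootstrap resamples;
    (2) estimation -- refit unpenalized models on each intersected support and
        combine them (a simple average here)."""
    rng = np.random.RandomState(seed)
    n, p = X.shape
    supports = []
    for lam in lambdas:                          # --- intersection (selection) step
        support = np.ones(p, dtype=bool)
        for _ in range(n_boot):
            Xb, yb = resample(X, y, random_state=rng)
            coef = Lasso(alpha=lam, max_iter=5000).fit(Xb, yb).coef_
            support &= (coef != 0)
        if support.any():
            supports.append(support)
    coefs = np.zeros((len(supports), p))         # --- union (estimation) step
    for i, s in enumerate(supports):
        coefs[i, s] = LinearRegression().fit(X[:, s], y).coef_
    return coefs.mean(axis=0) if len(coefs) else np.zeros(p)
```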
nips_2017_2107 | Conic Scan-and-Cover algorithms for nonparametric topic modeling
We propose new algorithms for topic modeling when the number of topics is unknown. Our approach relies on an analysis of the concentration of mass and angular geometry of the topic simplex, a convex polytope constructed by taking the convex hull of vertices representing the latent topics. Our algorithms are shown in practice to have accuracy comparable to a Gibbs sampler in terms of topic estimation, which requires the number of topics be given. Moreover, they are one of the fastest among several state of the art parametric techniques.
Statistical consistency of our estimator is established under some conditions. | * Summary
This paper introduces a novel algorithm, Conic Scan Coverage, that is
based on convex geometry ideas and can perform non-parametric topic
modeling. The algorithm is intuitively based on the idea of covering
the topic simplex with cones. The paper presents the results of an
experimental evaluation and the supplementary contains detailed proofs
and example topics inferred by the algorithm.
* Evaluation
I have very mixed feelings about this paper.
On the one hand, I must say I had a lot of fun reading it. I am really
fond of the convex geometric approach to topic modeling, it is such an
original and potentially promising approach. Indeed, the results are
very good, and the topics presented in the supplementary material look
great.
On the other hand, I have a lot of uncertainty about this
paper. First, I must confess I do not understand the paper as well as
I would like. Of course, it could mean that I should have tried
harder, or maybe that the authors should explain more. Probably a bit
of both. Second, the experiments are weak. However, the paper is so
original that I don't think it should be rejected just because of
this.
To summarize:
+ Impressive results: excellent runtime and statistical performance, good looking topics
- Weak experimental evaluation
+ Very original approach to topic modeling
- Algorithm 1 seems very ad-hoc, and justification seems insufficient to me
Could you please show me a few examples of inferred topic distributions?
* Discussion
- The experimental evaluation could be much better.
The Python implementation of Gibbs-LDA is pretty bad; it makes for a
very poor baseline. You really need to say somewhere how many
iterations were run.
When comparing against algorithms like LDA Gibbs or HDP Gibbs, as in
table 1, you can't report a single number. You should plot the
evolution of perplexity by iteration. For large datasets, Gibbs LDA
can reach a good perplexity and coherence in surprisingly few
iterations. Also, there should be some error bars; a number in
itself is not very convincing.
You should also share information about hyperparameter settings. For
instance, HDP and LDA can exhibit very different perplexity for
different values of their alpha and beta hyperparameters.
Finally, you should also vary the vocabulary size. I did note that V
is very small, and it is worrisome. There have been several
algorithms proposed in the past that seemed very effective for
unrealistically small vocabulary sizes and didn't scale well to this
parameter.
- The algorithm seems very ad-hoc
I am surprised that you choose a topic as the furthest document and
then remove all the documents within some cosine distance (a toy sketch of this step is given at the end of this review). What makes
such a farthest document a good topic representative? Also, if we
remove the documents from the active sets based solely on one topic,
are we really trying to explain the documents as mixture of topics?
I would be really curious to see a few documents and their inferred
topic distributions to check whether they are interpretable.
- What good is Theorem 4?
Theorem 4 is very interesting and definitely provides some good
justification for the algorithm. However, it assumes that the number
of topics K is known and doesn't say much about why such a procedure
should find a good number of topics. Indeed, in the end, I don't
have much intuition about what exactly drives the choice of number
of topics. |
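For reference, here is the toy version of the scan-and-cover step I had in mind above; the fixed cosine threshold and the simple centering are my assumptions, not the paper's adaptive choices.

```python
import numpy as np

def conic_scan(doc_vecs, cos_thresh=0.6):
    """Toy scan-and-cover: repeatedly take the document farthest from the
    centroid as a topic direction, then drop every active document lying
    inside the cone around that direction."""
    center = doc_vecs.mean(axis=0)
    active = list(range(len(doc_vecs)))
    topics = []
    while active:
        dists = [np.linalg.norm(doc_vecs[i] - center) for i in active]
        far = active[int(np.argmax(dists))]
        axis = doc_vecs[far] - center
        axis = axis / np.linalg.norm(axis)
        topics.append(doc_vecs[far])

        def cos(i):
            v = doc_vecs[i] - center
            return float(v @ axis) / (np.linalg.norm(v) + 1e-12)

        # keep only documents outside the cone around this topic direction
        active = [i for i in active if i != far and cos(i) < cos_thresh]
    return np.array(topics)
```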
nips_2017_1460 | Fisher GAN
Generative Adversarial Networks (GANs) are powerful models for learning complex distributions. Stable training of GANs has been addressed in many recent works which explore different metrics between distributions. In this paper we introduce Fisher GAN which fits within the Integral Probability Metrics (IPM) framework for training GANs. Fisher GAN defines a critic with a data dependent constraint on its second order moments. We show in this paper that Fisher GAN allows for stable and time efficient training that does not compromise the capacity of the critic, and does not need data independent constraints such as weight clipping. We analyze our Fisher IPM theoretically and provide an algorithm based on Augmented Lagrangian for Fisher GAN. We validate our claims on both image sample generation and semi-supervised classification using Fisher GAN. | This paper proposes a new criterion for training a Generative Adversarial Network and shows that this new criterion yields stability benefits.
The criterion is related to Fisher discriminant analysis and is essentially a normalized IPM.
The authors show that this criterion is equivalent to the symmetric chi-squared divergence.
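For reference, my reading of the normalized IPM in question is the following; the exact normalization constant and the way the constraint is enforced may differ in detail from the paper:

$$ d_{\mathcal{F}}(P, Q) \;=\; \sup_{f \in \mathcal{F}} \; \frac{\mathbb{E}_{x \sim P}[f(x)] - \mathbb{E}_{x \sim Q}[f(x)]}{\sqrt{\tfrac{1}{2}\,\mathbb{E}_{x \sim P}[f(x)^2] + \tfrac{1}{2}\,\mathbb{E}_{x \sim Q}[f(x)^2]}} $$

With f parametrized by the critic, this data-dependent second-moment normalization is what replaces weight clipping, and the equivalence to the symmetric chi-squared divergence is presumably attained at the optimal f.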
One reason for not being fully enthusiastic about this paper is that the fact that the proposed criterion is equivalent to chi-squared, which is
an f-divergence, reduces the novelty of the approach and raises several questions that are not addressed:
- is the proposed implementation significantly different than what one would obtain by applying f-GAN (with the appropriate f corresponding
to the chi-squared divergence)? if not, then the algorithm is really just f-GAN with chi-squared divergence and the novelty is drastically reduced. If yes,
then it would be great to pinpoint the differences and explain why they are important or would significantly address the issues that are inherent
to classical f-divergences based GANs in terms of training stability.
- how do the results compare if one uses other f-divergences (for example non-symmetric chi-squared but other f-divergences as well)?
- because the criterion is an f-divergence, it may suffer from the same issues that were pointed out in the WGAN paper: gradients would vanish
for distributions with disjoint support. Is there a reason why chi-squared would not have the same issues as Jensen-Shannon or KL ? Or is
it that the proposed implementation, not being exactly written as chi-squared and only equivalent at the optimum in lambda, doesn't suffer from
these issues?
Regarding the experimental results:
The DCGAN baseline for inception scores (Figure 4) seems lower than what was reported in earlier papers, and the current state of the art is more around 8
(not sure what it was at the time of the writing of this paper though).
The SSL results (Table 2) are ok when compared to the weight clipping implementation of WGAN but this is not necessarily the right baseline to compare with,
and compared to other algorithms, the results are not competitive.
My overall feeling is that this paper presents an interesting connection between the chi-squared divergence and the normalized IPM coefficient which
deserves further investigation (especially a better understanding of how this is connected to f-GAN with chi-squared). But the comparison with the
known suboptimal implementation of WGAN (with weight clipping) is not so interesting and the results are not really convincing either.
So overall I think the paper is below the acceptance bar in its current form. |
nips_2017_2384 | Simple Strategies for Recovering Inner Products from Coarsely Quantized Random Projections
Random projections have been increasingly adopted for a diverse set of tasks in machine learning involving dimensionality reduction. One specific line of research on this topic has investigated the use of quantization subsequent to projection with the aim of additional data compression. Motivated by applications in nearest neighbor search and linear learning, we revisit the problem of recovering inner products (respectively cosine similarities) in such setting. We show that even under coarse scalar quantization with 3 to 5 bits per projection, the loss in accuracy tends to range from "negligible" to "moderate". One implication is that in most scenarios of practical interest, there is no need for a sophisticated recovery approach like maximum likelihood estimation as considered in previous work on the subject. What we propose herein also yields considerable improvements in terms of accuracy over the Hamming distance-based approach in Li et al. (ICML 2014) which is comparable in terms of simplicity. | ********************************
* Summary *
********************************
The paper investigates theoretically and empirically different strategies for recovery of inner products using quantized random projections of data instances. Random projections are often used in learning tasks involving dimensionality reduction. The goal of the additional quantization step is data compression that allows for a reduction in space complexity of learning algorithms and more efficient communication in distributed settings.
********************************
* Theoretical contributions *
********************************
The main focus of the paper is on studying a linear strategy for recovery of inner products from quantized random projections of the data. The strategy approximates the inner product between two instances from the instance space with the inner product of the corresponding quantized random projections divided by the dimension of the projection space. The main theoretical contribution is a bound on the bias of such approximations (Theorem 1). In addition to this strategy, the paper considers recovery of inner products from random projections that are normalized (i.e., having unit norm) prior to quantization. For such approximations, the paper expresses the bias and variance in terms of the relevant terms of the linear strategy for recovery of inner products (Proposition 1). The paper also provides a bound on the variance of recovery of inner products with values close to one and strategies based on quantization with finitely many bits (Theorem 2).
********************************
* Quantization *
********************************
The quantization of random projections is performed using the Lloyd-Max quantizer. The method resembles one dimensional K-means clustering where interval end-points determine the clusters and the centroid is given as the conditional expectation of the standard normal random variable given the interval, i.e., c_k = E [ z | z \in [t_k, t_{k+1}) ], where c_k is the cluster centroid or quantization value, t_k and t_{k + 1} are interval end-points of the kth interval, and the total number of intervals K is given with the number of quantization bits B = 1 + \log_2 K.
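Putting the pieces above together, here is a small sketch of the quantizer and of the linear recovery of inner products; the fixed-point iteration count and the assumption of roughly unit-norm inputs (so that projections are approximately standard normal) are mine.

```python
import numpy as np
from scipy.stats import norm

def lloyd_max_gaussian(num_levels=8, iters=100):
    """Lloyd-Max quantizer for a standard normal source: alternate between
    midpoint thresholds and codewords c_k = E[Z | t_k <= Z < t_{k+1}]."""
    c = norm.ppf((np.arange(num_levels) + 0.5) / num_levels)    # initial codewords
    for _ in range(iters):
        t = np.concatenate(([-np.inf], (c[:-1] + c[1:]) / 2, [np.inf]))
        lo, hi = t[:-1], t[1:]
        mass = norm.cdf(hi) - norm.cdf(lo)
        c = (norm.pdf(lo) - norm.pdf(hi)) / mass                # truncated-normal means
    return t, c

def quantize(z, t, c):
    """Map each projection value to the codeword of its interval."""
    return c[np.clip(np.searchsorted(t, z, side="right") - 1, 0, len(c) - 1)]

def rho_lin(x, xp, k, t, c, seed=0):
    """Linear recovery of the inner product <x, x'> from quantized projections."""
    rng = np.random.RandomState(seed)
    R = rng.randn(len(x), k)
    return quantize(x @ R, t, c) @ quantize(xp @ R, t, c) / k
```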
********************************
* Empirical study *
********************************
# Figure 2
The goal of this experiment is to numerically verify the tightness of the bound on the bias of the linear strategy for recovery of inner products from quantized random projections. The figure shows that the bound is tight and already for the recovery with 5-bit quantization the bound almost exactly matches the bias. The paper also hypothesizes that the variance of the linear strategy with finite bit-quantization is upper bounded by the variance of the same strategy without quantization. The provided empirical result is in line with the hypothesis.
# Figure 3
The experiment evaluates the mean squared error of the recovery of inner products using the linear strategy as the number of quantization bits and dimension of the projection space change. The plot indicates that quantization with four bits and a few thousand of random projections might suffice for a satisfactory recovery of inner products.
# Figure 4
- In the first experiment, the normalized strategy for recovery of inner products from quantized random projections is compared to the linear one. The plot (left) indicates that a better bias can be obtained using the normalized strategy.
- In the second experiment, the variance of the quantized normalized strategy is compared to that without quantization. The plot (middle) indicates that already for quantization with 3 bits the variance is very close to the asymptotic case (i.e., infinitely many bits and no quantization).
- The third experiment compares the mean squared error of the normalized strategy for recovery of inner products from quantized random projections to that of the collision strategy. While the collision strategy performs better for recovery of inner products with values close to one, the normalized strategy is better globally.
# Figure 5
The experiment evaluates the strategies for recovery of inner products on classification tasks. In the first step the random projections are quantized and inner products are approximated giving rise to a kernel matrix. The kernel matrix is then passed to LIBSVM that trains a classifier. The provided empirical results show that the quantization with four bits is capable of generating an approximation to kernel matrix for which the classification accuracy matches that obtained using random projections without quantization. The plots depict the influence of the number of projections and the SVM hyperparameter on the accuracy for several high-dimensional datasets. The third column of plots in this figure also demonstrates that the normalized strategy for recovery of inner products from quantized random projections is better on classification tasks than the competing collision strategy.
********************************
* Theorem 1 *
********************************
Please correct me if I misunderstood parts of the proof.
# Appendix B: Eq. (4) --> Bound
Combining Eq. (6) with Eq. (4) it follows that
E[\rho_{lin}] - \rho = -2 \rho D_b + E[(Q(Z) - Z)(Q(Z') - Z')] >= -2 \rho D_b ==> 2 \rho D_b >= \rho - E[\rho_{lin}] .
To be able to square the latter without changing the inequality one needs to establish that \rho - E[\rho_{lin}] >= 0. Otherwise, it is possible that | \rho - E[\rho_{lin}] | > 2 \rho D_b and \rho - E[\rho_{lin}] < 0.
# Appendix B: Eq. (6)
- A proof of the left-hand side inequality is incomplete, E[(Q(Z) - Z)(Q(Z') - Z')] >= 0. At the moment the term is just expanded and it is claimed that the expansion is a fact. If so, please provide a reference for this result. Otherwise, an explanation is needed for why it holds that E[ Z Q(Z') ] - E[ Q(Z) Q(Z') ] <= E[ Z Z'] - E[ Z Q(Z') ].
- For the proof of the right-hand side inequality, it is not clear why it holds that E[ Z Z' ] + E[ Q(Z) Q(Z') ] - 2 E[ Z Q(Z') ] <= E[ Z Z' ] - E[ Z Q(Z') ].
### My decision is conditional on these remarks being addressed properly during the rebuttal phase. ###
********************************
* Minor comments *
********************************
- line 79: bracket is missing ( || z ||^2 + || z' ||^2 ) / k
- Appendix A, Eq. (2): the notation is not introduced properly |
nips_2017_668 | Differentiable Learning of Submodular Models
Can we incorporate discrete optimization algorithms within modern machine learning models? For example, is it possible to incorporate in deep architectures a layer whose output is the minimal cut of a parametrized graph? Given that these models are trained end-to-end by leveraging gradient information, the introduction of such layers seems very challenging due to their non-continuous output. In this paper we focus on the problem of submodular minimization, for which we show that such layers are indeed possible. The key idea is that we can continuously relax the output without sacrificing guarantees. We provide an easily computable approximation to the Jacobian complemented with a complete theoretical analysis. Finally, these contributions let us experimentally learn probabilistic log-supermodular models via a bi-level variational inference formulation. | This paper proposes a way to differentiate the process of submodular function minimization thus enabling to use these functionals as layers in neural networks. The key insight of the paper consists in the usage of the interpretation of discrete optimization of submodular functions as continuous optimization. As a concrete example the paper studies the CRF for image segmentation and creates and the graphcut layer. This layer is evaluated on the Weizmann dataset for horse segmentation and is reported to bring some improvements.
I generally like the paper very much and find the description of the method clear enough. In particular, I liked the short introduction to submodular functions and their connection to min-norm-point.
I have some comments that might allow the authors to increase the impact of the paper. My comments mostly cover the experimental evaluation and related work.
1. The experimental evaluation presented in Section 6 is a bit disappointing to a practitioner. The results are clearly far below state-of-the-art in image segmentation. To demonstrate the full potential of the new layer, I would recommend to plug the new layer into one of the state-of-the-art systems for image segmentation such as DeepLab (Chen et al., DeepLab: Semantic Image Segmentation with Deep Convolutional Nets, Atrous Convolution, and Fully Connected CRFs, 2016). I understand that the graphcut layer is applicable only to binary problems, but it would be very interesting to try it e.g. on one class of PASCAL VOC.
2. In particular, there is an extension of DeepLab (Chandra and Kokkinos, Fast, Exact and Multi-Scale Inference for Semantic Image Segmentation with Deep Gaussian CRFs, ECCV 2016) that puts a Gaussian CRF on top of the potentials learned by CNNs. They make the inference layer differentiable using the fact that inference in Gaussian CRFs reduces to solving systems of linear equations. It would be very interesting to see how the graph layer compares against the Gaussian CRF layer.
3. It would make sense to mention another line of works that incorporate discrete optimization into networks. In particular, if the final loss can be computed directly from the results of discrete optimization (for example, when the whole system is trained with a structured SVM objective: Jaderberg et al., Deep structured output learning for unconstrained text recognition, ICLR 2015; Vu et al., Context-aware CNNs for person head detection, ICCV 2015). Comparison to this approach can also strengthen the paper.
4. As mentioned in line 65, a popular way of embedding algorithms into neural networks consists in unrolling a fixed number of steps of an iterative algorithm into layers of a neural network. This paper uses one such iterative algorithm (a total variation solver) to do inference, so it would make sense to simply backprop through several iterations of it (a toy sketch of this alternative is given after this list). Again, comparison to this approach would strengthen the paper.
5. Are there any other examples (more rich than graphcuts) of the cases when minimization of submodular functions can be plugged into neural networks? Even naming such cases together with the appropriate submodular solvers would be very valuable.
6. In terms of the potentials learned by the graphcut layer, it would be very interesting to visualize what the network has learned and, e.g., compare those with the standard potentials based on the gradient of the image.
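Regarding point 4 above, here is a toy example of the unrolling alternative. The smooth quadratic pairwise term is a stand-in for the non-smooth total-variation objective actually used in the paper, so this only illustrates backprop through a fixed number of solver iterations, not the paper's solver.

```python
import torch

def unrolled_smoother(x, log_w, n_steps=10, lr=0.1):
    """Unroll gradient descent on 0.5*||u - x||^2 + 0.5*w*sum_i (u_i - u_{i-1})^2
    (circular boundary).  Every step stays on the autograd tape, so gradients of
    any downstream loss flow back into both x and the coupling parameter log_w.
    lr must stay below 2 / (1 + 4*w) for this quadratic to be stable."""
    w = torch.nn.functional.softplus(log_w)      # learnable coupling strength
    u = x.clone()
    for _ in range(n_steps):
        grad = (u - x) + w * (2 * u - torch.roll(u, 1, -1) - torch.roll(u, -1, -1))
        u = u - lr * grad
    return u
```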
Minor comments:
- Line 130 says "the Lovasz extension is linear in O", but O is a set and it is not clear what the phrase means.
- Line 135. [27] looks like the wrong reference
- Line 143. The definition of optimal partition is never given, so it remains unclear what it is.
- Line 295 says that only a small fraction of labelled pixels was used for training. It is not clear why this is done. |
nips_2017_2079 | Visual Reference Resolution using Attention Memory for Visual Dialog
Visual dialog is a task of answering a series of inter-dependent questions given an input image, and often requires to resolve visual references among the questions. This problem is different from visual question answering (VQA), which relies on spatial attention (a.k.a. visual grounding) estimated from an image and question pair. We propose a novel attention mechanism that exploits visual attentions in the past to resolve the current reference in the visual dialog scenario. The proposed model is equipped with an associative attention memory storing a sequence of previous (attention, key) pairs. From this memory, the model retrieves the previous attention, taking into account recency, which is most relevant for the current question, in order to resolve potentially ambiguous references. The model then merges the retrieved attention with a tentative one to obtain the final attention for the current question; specifically, we use dynamic parameter prediction to combine the two attentions conditioned on the question. Through extensive experiments on a new synthetic visual dialog dataset, we show that our model significantly outperforms the state-of-the-art (by ≈ 16 % points) in situations, where visual reference resolution plays an important role. Moreover, the proposed model achieves superior performance (≈ 2 % points improvement) in the Visual Dialog dataset [1], despite having significantly fewer parameters than the baselines. | This paper proposed a visual reference resolution model for visual dialog. The authors proposed to attentions, 1: tentative attention that only consider current question and history, and 2: relevant attention that retrieved from an associate attention memory. Two attentions are further combined with a dynamic parameter layer from [9] and predict the final attention on image. The authors create MNIST Dialog synthetic dataset which model the visual reference resolution of ambiguous expressions. The proposed method outperform baseline with large margin. The authors also perform experiments on visual dialog dataset, and show improvements over the previous methods.
[Paper Strengths]
1: Visual reference resolution is a nice and intuitive idea on visual dialog dataset.
2: MNIST Dialog synthetic dataset is a plus.
3: Paper is well written and easy to follow.
[Paper Weaknesses]
My major concern about this paper is the experiments on the visual dialog dataset. The authors only show the proposed model's performance in the discriminative setting without any ablation studies. There are not enough experimental results to show how the proposed model works on the real dataset. If possible, please answer my following questions in the rebuttal.
1: The authors claim their model can achieve superior performance while having significantly fewer parameters than the baseline [1]. This is mainly achieved by using a much smaller word embedding size and LSTM size. To me, it could be that the authors in [1] simply tested their model with a standard parameter setting. To back up this claim, are there any improvements when the proposed model uses larger word embeddings and LSTM parameters?
2: There are two test settings in visual dialog, while Table 1 only shows results for the discriminative setting. It is known that the discriminative setting cannot be applied in real applications; what is the result in the generative setting?
3: To further back up that the proposed visual reference resolution model works on a real dataset, please also conduct an ablation study on the VisDial dataset. One experiment I'm really interested in is the performance of ATT(+H) (in Figure 4, left). What is the result if the proposed model does not consider the relevant attention retrieved from the attention memory?
nips_2017_3418 | Attend and Predict: Understanding Gene Regulation by Selective Attention on Chromatin
The past decade has seen a revolution in genomic technologies that enabled a flood of genome-wide profiling of chromatin marks. Recent literature tried to understand gene regulation by predicting gene expression from large-scale chromatin measurements. Two fundamental challenges exist for such learning tasks: (1) genome-wide chromatin signals are spatially structured, high-dimensional and highly modular; and (2) the core aim is to understand what the relevant factors are and how they work together. Previous studies either failed to model complex dependencies among input signals or relied on separate feature analysis to explain the decisions. This paper presents an attention-based deep learning approach, AttentiveChrome, that uses a unified architecture to model and to interpret dependencies among chromatin factors for controlling gene regulation. AttentiveChrome uses a hierarchy of multiple Long Short-Term Memory (LSTM) modules to encode the input signals and to model how various chromatin marks cooperate automatically. AttentiveChrome trains two levels of attention jointly with the target prediction, enabling it to attend differentially to relevant marks and to locate important positions per mark. We evaluate the model across 56 different cell types (tasks) in humans. Not only is the proposed architecture more accurate, but its attention scores provide a better interpretation than state-of-the-art feature visualization methods such as saliency maps.
DNA. These spatial re-arrangements result in certain DNA regions becoming accessible or restricted and therefore affecting expressions of genes in the neighborhood region. Researchers have established the "Histone Code Hypothesis" that explores the role of histone modifications in controlling gene regulation. Unlike genetic mutations, chromatin changes such as histone modifications are potentially reversible ( [5]). This crucial difference makes the understanding of how chromatin factors determine gene regulation even more impactful because this knowledge can help developing drugs targeting genetic diseases.
At the whole genome level, researchers are trying to chart the locations and intensities of all the chemical modifications, referred to as marks, over the chromatin (in biology this field is called epigenetics; "epi" in Greek means over, and the epigenome in a cell is the set of chemical modifications over the chromatin that alter gene expression). Recent advances in next-generation sequencing have allowed biologists to profile a significant amount of gene expression and chromatin patterns as signals (or read counts) across many cell types covering the full human genome. These datasets have been made available through large-scale repositories, the latest being the Roadmap Epigenome Project (REMC, publicly available) ([18]). REMC recently released 2,804 genome-wide datasets, among which 166 datasets are gene expression reads (RNA-Seq datasets) and the rest are signal reads of various chromatin marks across 100 different "normal" human cells/tissues [18].
The fundamental aim of processing and understanding this repository of "big" data is to understand gene regulation. For each cell type, we want to know which chromatin marks are the most important and how they work together in controlling gene expression. However, previous machine learning studies on this task either failed to model spatial dependencies among marks or required additional feature analysis to explain the predictions (Section 4). Computational tools should consider two important properties when modeling such data.
• First, signal reads for each mark are spatially structured and high-dimensional. For instance, to quantify the influence of a histone modification mark, learning methods typically need to use as input features all of the signals covering a DNA region of length 10,000 base pairs (bp) centered at the transcription start site (TSS) of each gene. These signals are sequentially ordered along the genome direction. To develop "epigenetic" drugs, it is important to recognize how a chromatin mark's effect on regulation varies over different genomic locations.
• Second, various types of marks exist in human chromatin that can influence gene regulation. For example, each of the five standard histone proteins can be simultaneously modified at multiple different sites with various kinds of chemical modifications, resulting in a large number of different histone modification marks. For each mark, we build a feature vector representing its signals surrounding a gene's TSS position. When modeling genome-wide signal reads from multiple marks, learning algorithms should take into account the modular nature of such feature inputs, where each mark functions as a module. We want to understand how the interactions among these modules influence the prediction (gene expression).
In this paper we propose an attention-based deep learning model, AttentiveChrome, that learns to predict the expression of a gene from an input of histone modification signals covering the gene's neighboring DNA region. By using a hierarchy of multiple LSTM modules, AttentiveChrome can discover interactions among signals of each chromatin mark, and simultaneously learn complex dependencies among different marks. Two levels of "soft" attention mechanisms are trained, (1) to attend to the most relevant regions of a chromatin mark, and (2) to recognize and attend to the important marks. Through predicting and attending in one unified architecture, AttentiveChrome allows users to understand how chromatin marks control gene regulation in a cell. In summary, this work makes the following contributions:
• AttentiveChrome provides more accurate predictions than state-of-the-art baselines. Using datasets from REMC, we evaluate AttentiveChrome on 56 different cell types (tasks).
• We validate and compare interpretation scores using correlation to a new mark signal from REMC (not used in modeling). AttentiveChrome's attention scores provide a better interpretation than state-of-the-art methods for visualizing deep learning models.
• AttentiveChrome can model highly modular inputs where each module is highly structured. It can explain its decisions by providing "what" and "where" the model has focused on. This flexibility and interpretability make this model an ideal approach for many real-world applications.
• To the authors' best knowledge, AttentiveChrome is the first attention-based deep learning method for modeling data from molecular biology.
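To make the two-level attention architecture described above concrete, the following is a minimal PyTorch-style sketch. It is not the authors' implementation: the hidden sizes, the number of marks and bins, and the choice to share one bin-level LSTM across all marks are placeholder assumptions for illustration only.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SoftAttention(nn.Module):
    """Learns a context vector and returns an attention-weighted sum of its inputs."""
    def __init__(self, dim):
        super().__init__()
        self.context = nn.Parameter(torch.randn(dim))
    def forward(self, h):                        # h: (batch, steps, dim)
        alpha = F.softmax(h.matmul(self.context), dim=1)   # attention weights over steps
        return (alpha.unsqueeze(-1) * h).sum(dim=1), alpha

class TwoLevelAttentionEncoder(nn.Module):
    """Bin-level LSTM + attention per mark, then mark-level LSTM + attention, then a classifier."""
    def __init__(self, n_marks=5, hidden=32, n_classes=2):
        super().__init__()
        self.n_marks = n_marks
        self.bin_lstm = nn.LSTM(1, hidden, batch_first=True, bidirectional=True)
        self.bin_attn = SoftAttention(2 * hidden)
        self.mark_lstm = nn.LSTM(2 * hidden, hidden, batch_first=True, bidirectional=True)
        self.mark_attn = SoftAttention(2 * hidden)
        self.classifier = nn.Linear(2 * hidden, n_classes)
    def forward(self, x):                        # x: (batch, n_marks, n_bins) of read counts
        mark_embs = []
        for m in range(self.n_marks):            # encode each mark's bins separately
            h, _ = self.bin_lstm(x[:, m, :].unsqueeze(-1))
            emb, _ = self.bin_attn(h)            # "where": attention over bins of this mark
            mark_embs.append(emb)
        marks = torch.stack(mark_embs, dim=1)    # (batch, n_marks, 2*hidden)
        h2, _ = self.mark_lstm(marks)
        gene_emb, mark_alpha = self.mark_attn(h2)   # "what": attention over marks
        return self.classifier(gene_emb), mark_alpha

model = TwoLevelAttentionEncoder()
logits, mark_attention = model(torch.rand(8, 5, 100))
print(logits.shape, mark_attention.shape)        # torch.Size([8, 2]) torch.Size([8, 5])
```

The two attention vectors produced by such a model are the quantities one would inspect to read off which bins and which marks the prediction relied on.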
In the following sections, we denote vectors with bold font and matrices using capital letters. To simplify notation, we use "HM" as a short form for the term "histone modification". | The paper presents a novel method for predicting gene regulation using an LSTM with an attention mechanism. The model consists of two levels, where the first level is applied to bins for each histone modification (HM) and the second level is applied across multiple HMs. An attention mechanism is used at each level to focus on the important parts of the bins and HMs. In the experiments, the proposed method improves AUC scores over baseline models including CNN, LSTM, and CNN with an attention mechanism. This is an interesting paper which shows that an LSTM with an attention mechanism can predict gene regulation.
1. It is unclear to me why the second level is modeled using an LSTM, because there is no ordering between HMs. Would it be reasonable to use a fully connected layer to model dependencies between HMs? Based on Tables 1 and 2, the one-level model (LSTM-\alpha) outperforms the two-level model (LSTM-\alpha,\beta) in many cases. It would be interesting to investigate how HMs are coupled with each other in the learned model.
2. In Section 2, an algorithm box that includes the equations for the entire model would be very helpful.
3. It is unclear to me if DeepChrome was compared with the proposed method in the experiments. It would be helpful to indicate which model corresponds to DeepChrome.
4. As baselines, it would be helpful to include non-neural network models such as SVM or logistic regression.
5. To allow researchers in biology to use this method, it would be very helpful to have the source code publicly available with URLs in the paper. |
nips_2017_95 | Dynamic Safe Interruptibility for Decentralized Multi-Agent Reinforcement Learning
In reinforcement learning, agents learn by performing actions and observing their outcomes. Sometimes, it is desirable for a human operator to interrupt an agent in order to prevent dangerous situations from happening. Yet, as part of their learning process, agents may link these interruptions, that impact their reward, to specific states and deliberately avoid them. The situation is particularly challenging in a multi-agent context because agents might not only learn from their own past interruptions, but also from those of other agents. Orseau and Armstrong [16] defined safe interruptibility for one learner, but their work does not naturally extend to multi-agent systems. This paper introduces dynamic safe interruptibility, an alternative definition more suited to decentralized learning problems, and studies this notion in two learning frameworks: joint action learners and independent learners. We give realistic sufficient conditions on the learning algorithm to enable dynamic safe interruptibility in the case of joint action learners, yet show that these conditions are not sufficient for independent learners. We show however that if agents can detect interruptions, it is possible to prune the observations to ensure dynamic safe interruptibility even for independent learners. | This paper presents an extension of the safe interruptibility (SInt)
framework to the multi-agent case. The authors argue that the
original definition of safe interruptibility is difficult to use in
this case and give a more constrained/informed one called 'dynamic safe
interruptibility' (DSInt) based on whether the update rule depends
on the interruption probability. The joint action case is considered
first and it is shown that DSInt can be achieved. The case of
independent learners is then considered, with a first result showing
that independent Q-learners do not satisfy the conditions of the
definition of DSInt. The authors finally propose a model where the
agents are aware of each other's interruptions, and interrupted
observations are pruned from the sequence, and claim that this model
verifies the definition of DSInt.
The paper is mostly well-written, well motivated, offers novel ideas
and appears mostly technically correct. The level of formalism is
good, emphasizing the ideas rather than rendering a boring
sequence of symbols, but this also somewhat hurts the proof-reading
of the appendix, so I can't be 100% confident about the results.
* Main comments:
- What is the relation between the authors' definition of DSInt and
the original version? In particular, do we have DSInt implies SInt,
or the converse, or neither?
- In the proof of Lemma 1, p. 12, third equality, a condition on 'a'
is missing in the second P term. As far as I understand, this
condition cannot be removed, preventing this term from being pulled
out of the sum.
* Minor comments:
- The motivating example is pretty good. On a side note however, it
seems that in this scenario the knowledge of others' interruptions
may still be inadequate, as in the general case the different
vehicles may not even be of the same brand and thus may not exchange
information (unless some global norm is defined?). This is not so
much of a criticism, as one needs to start somewhere, but a
discussion about that could go in the conclusion.
- Definition 2: This should define "dynamic safe interruptibility",
not just "safe interruptibility" which is already defined in [16].
- The notation used for open intervals is at times standard (in
the appendix) and at other times non-standard (as in the main text).
This should be made homogeneous (preferably using the standard
notation).
- Def 5: "multi-agent systems" -> "multi-agent system"
- Proof of Thm 2: On the first equality in the display, we lost the
index (i).
- The second line after the display should probably read
\hat{Q^m_t} instead of Q^{(m)}_t.
- Missing period at the end of the proof.
- Proof of Thm 3: "two a and b" -> "two agents a and b" (or learner,
player?)
- \gamma is not defined AFAICT
- Proof of Thm 4, "same than" -> "same as"
- Does it mean that independent learners are DSInt but not SInt?
- Penultimate sentence of the conclusion, "amount exploration" -> +of
- p.12 after Lemma 2 "each agents do not" -> "each agent does" or
"no agent learns"
- last line of the display in the proof of Lemma 1: Use the \left[
and \right] commands (assuming LaTeX is being used) to format
properly.
- proof of Thm 4: q is undefined. |
nips_2017_1142 | Repeated Inverse Reinforcement Learning
We introduce a novel repeated Inverse Reinforcement Learning problem: the agent has to act on behalf of a human in a sequence of tasks and wishes to minimize the number of tasks in which it surprises the human by acting suboptimally with respect to how the human would have acted. Each time the human is surprised, the agent is provided a demonstration of the desired behavior by the human. We formalize this problem, including how the sequence of tasks is chosen, in a few different ways and provide some foundational results. | ## Summary
This is a theoretical work in which the authors present a new problem domain called repeated inverse reinforcement learning (RIRL). This domain describes interactions between a human (expert) and the RIRL agent, whereby the agent repeatedly presents solutions to tasks (in the form of a discounted state occupancy) and the human responds with a demonstration if surprised by the (suboptimality) of the agent's choice. At each iteration the agent also potentially makes some choice about the domain, which can include the state transition structure, and/or the task specific part of the reward function. The goal is to determine the task independent reward function, which in real terms might refer to such things as norms, preferences and safety considerations. Towards the end of the paper, the authors relax the requirement for a full state occupancy to be presented, instead allowing trajectories to be presented. The solution is presented either as an optimal policy for any desired task, or by identifying the task independent reward function up to an equivalence class.
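For readers unfamiliar with the volume-reduction idea mentioned above, here is a generic central-cut ellipsoid update in numpy. It is a standard textbook routine and only a stand-in for whatever exact update the paper uses; in the repeated IRL setting, the cut direction `a` would come from the hyperplane implied by a human demonstration, and the names below are illustrative.

```python
import numpy as np

def ellipsoid_cut(c, P, a):
    """One central-cut update: shrink the ellipsoid {x : (x-c)^T P^{-1} (x-c) <= 1}
    to the minimum-volume ellipsoid containing its intersection with {x : a^T x <= a^T c}."""
    d = c.size
    assert d >= 2, "the closed-form update below assumes dimension at least 2"
    Pa = P @ a
    alpha = Pa / np.sqrt(a @ Pa)                       # cut direction, scaled by the ellipsoid metric
    c_new = c - alpha / (d + 1)
    P_new = (d**2 / (d**2 - 1.0)) * (P - (2.0 / (d + 1)) * np.outer(alpha, alpha))
    return c_new, P_new

c, P = np.zeros(4), np.eye(4)                          # start from the unit ball in R^4
for a in np.random.default_rng(0).normal(size=(10, 4)):  # ten arbitrary cut directions
    c, P = ellipsoid_cut(c, P, a)
print(np.linalg.det(P))                                # det(P) (proportional to squared volume) shrinks
```

Each central cut shrinks the volume by a factor of roughly exp(-1/(2(d+1))), which is the kind of geometric shrinkage that yields mistake bounds logarithmic in 1/epsilon.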
The main contributions of the paper are two algorithms and a series of convergence bounds proven for each setting. The algorithms making use of a 'volume reduction in ellipsoid' algorithm from the optimization literature, which reduces the space of possible reward functions to a minimal-volume enclosing ellipsoid. The paper is quite dense in theoretical results and pushes a number of proofs to appendices. Also, because of space considerations there are some very compact descriptions with high level intuition only. However, these results appear to be sound (to the best of my ability to judge) and to have general applicability in the newly defined problem domain. The authors could do a little more to provide both a concrete example to motivate the domain and some intuition to guide the reader (please see my comments below). Also, the tractability of this approach for 'interesting' domains is difficult to judge at this early theoretical stage.
## More detailed comments
# p2
# Authors use \Delta to represent a probability distribution over various sets, but don't define this notation anywhere, or scope its meaning (some authors forbid continuous sets, other authors forbid discrete distributions with zero components when using this notation).
# Or the identity function used for the state occupancy vector.
# p4, line 158
Each task is denoted as a pair (X, R).
# The authors do not describe the meaning of R in the general bandit description. From later text it becomes apparent, but it should be stated here explicitly.
# p5, lines 171-177
# Is this really a generalisation? It seems more like a constraint to me. It certainly implies a constraint on the relationship between rewards for two different states which share features. Extension/modification might be a better word than generalisation.
# It is also implied, but not stated that the policy now maps from features to actions, rather than from states to actions.
# p5 line 183, language
...for normalization purpose,
# for normalization purposes,
# p5, line 186, meaning
...also contains the formal protocol of the process.
# It is unclear what the authors mean by this.
# p5 line 197, clarity
Therefore, the update rule on Line 7...
# on Line 7 of Algorithm 1
# p5 line 205, clarity
...in the worst case...
# I think 'at its smallest' (or similar) would be clearer. As this 'worst case' would represent the most accurate approximation to theta* possible after . A good thing.
# p5 line 206, clarity
To simplify calculation, we relax this l∞ ball to its inscribed l2 ball.
# This took me a little while to interpret. It would be clearer if the authors talked about the lower bound represented by the l∞ ball being relaxed, rather than the ball being relaxed.
# p6 line 208 and Equation below
# When the authors say 'The unit sphere', I guess they are now talking about the l2 norm ball. This should be explicit. As should the vol function representing the volume of an ellipsoid. Also some intuition could be given as to why the third part of the inequality is as it is. If I am right then C_d sqrt(d)^d is the volume of an l2 ball that contains \Theta0 and C_d (ε/4)^d is the volume of the aforementioned lower bound l2 ball.
# p7 meaning
We argue that this is not a reasonable protocol for two reasons: (1) in
expectation, the reward collected by the human may be less than that by the agent, which is due to us conditioning on the event that an error is spotted
# Is there a way of making this statement clearer.
# p7 clarity
demonstration were still given in state occupancy
# Is the meaning 'demonstration were still given in terms of a state occupancy vector'
# and '(hence μ t )' could be expanded to '(we call the resulting state occupancy μ t ).
# p8 discussion for Algorithm 2, intuition.
# The authors could give some intuition behind the construction of the batch state vectors \bar{Z} and \bar{Z}^* in the Algorithm. The former appears to be an unnormalised state occupancy following \pi_t = \pi_{t-n} whose initial state distribution is uniformly sampled from the states with mistakes in between iterations t-n and t. Likewise, the \bar{Z}* vector is an unnormalised sum of discounted state visits from the n demonstrations initiated by the mistakes. Is this correct? |
nips_2017_94 | Machine Learning with Adversaries: Byzantine Tolerant Gradient Descent
We study the resilience to Byzantine failures of distributed implementations of Stochastic Gradient Descent (SGD). So far, distributed machine learning frameworks have largely ignored the possibility of failures, especially arbitrary (i.e., Byzantine) ones. Causes of failures include software bugs, network asynchrony, biases in local datasets, as well as attackers trying to compromise the entire system. Assuming a set of n workers, up to f being Byzantine, we ask how resilient SGD can be, without limiting the dimension or the size of the parameter space. We first show that no gradient aggregation rule based on a linear combination of the vectors proposed by the workers (i.e., current approaches) tolerates a single Byzantine failure. We then formulate a resilience property of the aggregation rule capturing the basic requirements to guarantee convergence despite f Byzantine workers. We propose Krum, an aggregation rule that satisfies our resilience property, which we argue is the first provably Byzantine-resilient algorithm for distributed SGD. We also report on experimental evaluations of Krum. | Intro:
The paper introduces a novel algorithm, called Krum, that combines partially calculated gradients in a Byzantine failure tolerant way. Such an algorithm is meant to be useful in distributed training of machine learning models on very large data sets.
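For concreteness, below is a small numpy sketch of the Krum selection rule as it is usually stated: score each proposed gradient by the summed squared distances to its n - f - 2 closest peers and keep the minimizer. The toy data and dimensions are illustrative only, not taken from the paper's experiments.

```python
import numpy as np

def krum(gradients, f):
    """Krum aggregation: return the worker gradient whose summed squared distance to its
    n - f - 2 nearest neighbours is smallest (n workers, at most f of them Byzantine)."""
    V = np.asarray(gradients)                            # shape (n, d)
    n = V.shape[0]
    k = n - f - 2                                        # number of neighbours used in the score
    assert k >= 1, "need n > f + 2"
    sq_dists = ((V[:, None, :] - V[None, :, :]) ** 2).sum(-1)   # (n, n) pairwise squared distances
    scores = []
    for i in range(n):
        d = np.delete(sq_dists[i], i)                    # distances to the other workers
        scores.append(np.sort(d)[:k].sum())              # sum over the k closest ones
    return V[int(np.argmin(scores))]

rng = np.random.default_rng(0)
honest = rng.normal(0.0, 0.1, size=(8, 5)) + 1.0         # honest gradients clustered around 1.0
byzantine = rng.normal(50.0, 1.0, size=(3, 5))           # adversarial proposals far away
print(krum(np.vstack([honest, byzantine]), f=3))         # prints one of the honest gradients (entries near 1.0)
```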
I am a machine learner, but not a distributed computing expert. Hence, I review the manuscript from a customer perspective.
---
Strengths:
* The research question is very interesting. Distributed techniques start to play a more important role in machine learning as the scales of the data sets grow exponentially.
* The paper is written in a sophisticated academic language. It is also nicely structured.
* The proposed algorithm is supported with rigorous theoretical argumentations.
---
Weaknesses:
* Results in Figure 4 are not charming. When there are no Byzantine failures, the proposed algorithm lags far behind simple averaging. It starts to show its power only after one third of the workers are Byzantine. This looks like an extremely high failure rate. Apparently, averaging will remain extremely competitive when the failure rates are realistically low, making the motivation of the proposed approach questionable.
* While the paper introduces an idea that starts to become useful when the data set is dramatically large, the reported experiments are on extremely small data sets. MNIST has 60000 instances, spambase has 4601. The real effect of the divergence between the true and the stochastic gradient starts to become visible when the data set is large enough. With today's hardware technology, a modest workstation with a few-thousand-dollar GPU can easily be used to train on MILLIONS of data points without any need for such protocols as Krum. Furthermore, whether more data points than a few million are needed depends totally on applications. In many, the performance difference between 10 Million and 100 Million is negligibly small. For a machine learner to be convinced to slow down her model for the sake of safe distributedness, the authors should pinpoint applications where there is a real need for this and report results on those very applications.
* The proposed algorithm is for updating global parameters from stochastic gradients calculated on minibatches. Although this approach is fundamental to many machine learning models, the central challenge of the contemporary techniques is different. Deep neural nets require distributed computing to distribute operations across "neurons" rather than minibatches. The fact that the proposed algorithm cannot be generalized to this scenario reduces the impact potential of this work significantly.
Minor point: The paper would also be stronger if it cited earlier pioneer work on distributed machine learning. One example is:
C.T. Chu et al., Map-Reduce for Machine Learning on Multicore, NIPS, 2007
---
Preliminary Evaluation:
While I appreciate the nice theoretical work behind this paper and that distributed machine learning is an issue of key importance, the reported results shed doubt on the usefulness of the proposed approach.
---
Final Evaluation:
I do acknowledge that 33% Byzantine failure rate is a standard test case for general distributed computing tasks, but here our concern is training a machine learning model. The dynamics are a lot different from "somehow" processing as large data bunches as possible. The top-priority issue is accuracy, not data size. According to Figure 4, Krum severely undermines the model accuracy if there is no attack. This literally means a machine learner will accept to use Krum only when she is ABSOLUTELY sure that i) a 10-20 million data point subset will not be sufficient for satisfactory accuracy (hence distributed computing is required), and ii) at least 33% of the nodes will act Byzantine (hence Krum is required). As a machine learner, I am trying hard but not managing to find out such a case. Essentially, it is not the readership's but the authors' duty to bring those cases to attention. This is missing in both the paper and the rebuttal. I keep my initial negative vote. |
nips_2017_760 | Sharpness, Restart and Acceleration
The Łojasiewicz inequality shows that sharpness bounds on the minimum of convex optimization problems hold almost generically. Sharpness directly controls the performance of restart schemes, as observed by Nemirovskii and Nesterov [1985]. The constants quantifying error bounds are of course unobservable, but we show that optimal restart strategies are robust, and searching for the best scheme only increases the complexity by a logarithmic factor compared to the optimal bound. Overall then, restart schemes generically accelerate accelerated methods. | This paper considers first-order algorithms for Holder-smooth convex optimization in the oracle model with an additional sharpness assumption, guaranteeing that, within a neighborhood of the optimum, a reduction in objective value yields a reduction in distance from the optimum. Recently, there has been growing interest in the algorithmic consequences of the presence of sharpness, particularly in the setting of alternating minimization and of compressed sensing.
Sharpness can be exploited to speed up the convergence of first-order methods, such as Nesterov's accelerated gradient descent, by appropriately restarting the algorithm after a certain number of iterations, possibly changing with the number of rounds. First, the authors provide asymptotically optimal restart schedules for this class of problem for given sharpness parameters mu and r. While this is interesting, the result is essentially the same as that appearing, in more obscure terms, in Nemirovski and Nesterov's original 1985 paper "Optimal methods of smooth convex optimization". See paragraph 5 of that paper.
More importantly, the authors show that a log-scale grid search can be performed to construct adaptive methods that work in settings where mu and r are unknown, which is typical in sharpness applications. This appears to be the main novel idea of the paper. From a theoretical point of view, I find this to be a fairly straightforward observation. On the other hand, such an observation may be important in practice. Indeed, the authors also show a small number of practical examples in the context of classification, in which the restart schedules significantly improve performance. At the same time, the fact that restarts can greatly help the convergence of accelerated methods has already been observed before (see O'Donoghue and Candes, as cited in the paper).
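To illustrate what such a restart-plus-grid-search scheme looks like in practice, here is a hedged numpy sketch: plain Nesterov acceleration on a least-squares objective, restarted every fixed number of iterations, with the restart period chosen from a log-spaced grid. This is a generic scheme in the spirit described above, not the paper's exact adaptive schedule, and the problem instance is made up.

```python
import numpy as np

def accel_grad(grad, x0, L, iters, restart_every=None):
    """Nesterov's accelerated gradient, optionally restarted every `restart_every` steps."""
    x, x_prev, t = x0.copy(), x0.copy(), 1.0
    for k in range(iters):
        if restart_every and k > 0 and k % restart_every == 0:
            t, x_prev = 1.0, x.copy()                 # reset the momentum at the current point
        t_next = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        y = x + ((t - 1.0) / t_next) * (x - x_prev)   # extrapolation step
        x_prev, x = x, y - grad(y) / L                # gradient step from the extrapolated point
        t = t_next
    return x

rng = np.random.default_rng(0)
A, b = rng.normal(size=(200, 50)), rng.normal(size=200)
L = np.linalg.norm(A, 2) ** 2                          # Lipschitz constant of the gradient
grad = lambda x: A.T @ (A @ x - b)
f = lambda x: 0.5 * np.sum((A @ x - b) ** 2)

# Log-scale grid over restart periods: run each schedule and keep the best final value.
for period in [None, 8, 16, 32, 64, 128]:
    x = accel_grad(grad, np.zeros(50), L, iters=500, restart_every=period)
    print(period, f(x))
```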
In conclusion, I find the paper interesting from a practical point of view and I wish that the authors had focused more on the empirical comparison of their restart schedule vs that of Nemirovski and Nesterov and others. From a theoretical point of view, my feeling is that the contribution is good but probably not good enough for NIPS. It might help if the authors, in their rebuttal, explained more clearly the relation of their non-adaptive bounds with those of Nemirovski and Nesterov. |
nips_2017_1886 | Dynamic-Depth Context Tree Weighting
Reinforcement learning (RL) in partially observable settings is challenging because the agent's observations are not Markov. Recently proposed methods can learn variable-order Markov models of the underlying process but have steep memory requirements and are sensitive to aliasing between observation histories due to sensor noise. This paper proposes dynamic-depth context tree weighting (D2-CTW), a model-learning method that addresses these limitations. D2-CTW dynamically expands a suffix tree while ensuring that the size of the model, but not its depth, remains bounded. We show that D2-CTW approximately matches the performance of state-of-the-art alternatives at stochastic time-series prediction while using at least an order of magnitude less memory. We also apply D2-CTW to model-based RL, showing that, on tasks that require memory of past observations, D2-CTW can learn without prior knowledge of a good state representation, or even the length of history upon which such a representation should depend. | The paper develops a variation on Context Tree Weighting (CTW) which keeps memory costs low by adapting the depth of each branch to the extent that it aids prediction accuracy. The new algorithm, called Utile Context Tree Weighting (UCTW), is shown empirically in some illustrative examples to use less memory than fixed-depth CTW (since it can keep some branches short) and to be more effective under a memory bound (in which it must prune a node every time it expands a node).
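As background for the discussion below, here is a minimal sketch of standard fixed-depth binary CTW: a Krichevsky-Trofimov (KT) estimator at each context node, mixed with weight 1/2 between "stop here" and "split on one more context bit". D2-CTW's dynamic expansion and pruning are not reproduced here, and the depth and toy sequence are arbitrary choices for illustration.

```python
import numpy as np

class Node:
    def __init__(self):
        self.counts = [0, 0]        # zeros / ones seen under this context
        self.log_pe = 0.0           # log KT-estimator probability of those symbols
        self.children = {}          # keyed by the next older context bit

def update_path(root, context, symbol):
    """Update the KT estimator of every suffix of `context` (most recent bit first)."""
    node = root
    for depth in range(len(context) + 1):
        a, b = node.counts
        node.log_pe += np.log((node.counts[symbol] + 0.5) / (a + b + 1.0))
        node.counts[symbol] += 1
        if depth < len(context):
            node = node.children.setdefault(context[depth], Node())

def log_pw(node):
    """Context-tree weighting: mix 'stop here' (P_e) with 'split on one more bit'."""
    if not node.children:
        return node.log_pe
    kids = sum(log_pw(c) for c in node.children.values())   # missing children contribute log 1 = 0
    return np.logaddexp(np.log(0.5) + node.log_pe, np.log(0.5) + kids)

D = 3
seq = [0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1]      # alternating bits: depth-1 contexts suffice
root = Node()
for t in range(D, len(seq)):
    context = seq[t - 1::-1][:D]                # most recent bit first
    update_path(root, context, seq[t])
print("log P_w of the symbols after the first D:", log_pw(root))
```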
---Quality---
As far as I can tell the technical claims and formalization of the algorithm are sensible. The experiments are, for the most part well designed to answer the questions being asked.
One experiment that felt less well-posed was the T-Maze. The text says "We consider a maze of length 4. Thus we set K = 3." What does that "thus" mean? Is the implication that K = 3 should be deep enough to represent the environment? Later it says that "during the initial stages of learning the agent may need more than 3 steps to reach the goal." I assume that means that the agent might move up and down the "stem" of the T for a while, before reaching the goal, thus forgetting the initial observation if the suffix is limited to depth 3. If that's the case, then K = 3 is only sufficient to make predictions under the *optimal* policy, so it's no surprise that CTW+UCT can't perform well (UCT does random rollouts!). In fact, under those dynamics no finite suffix is enough to represent the environment (for arbitrary action sequences), so even the depth 4 model that UCTW learns is incorrect -- it just happens to be deep enough to be sufficiently robust to suboptimal behavior to allow the planner to work. I guess I'm just not entirely sure what to conclude from these results. We see that CTW does poorly when given inadequate depth (no surprise) and that UCTW adapts its depth, so that's fine. But UCTW doesn't learn a good model either, and it's basically a coincidence of the domain that it happens to work out for planning purposes. The other experiments, which are more focused on performance under memory bounds, make a lot more sense to me.
---Clarity---
I think the paper was pretty clearly written. The theoretical framework of CTW is always a challenge to present, and I think the authors have done pretty well. The main idea of the algorithm is described well at an intuitive level as well as at a formal level.
I will say this: the name of the algorithm is confusing. The "utile" in utile suffix memory refers to the fact that the tree is expanded based on *utility* (i.e. value). The main point of that work was that the tree should only be as complicated as it needs to be in order to solve the control task. Here the tree is being split based on prediction error of the next observation, not utility, so it is strange to call it Utile CTW. I saw the footnote acknowledging and clarifying this mismatch...but the fact that you had to write that footnote is a pretty good sign that the name is confusing! How about "Incremental Expansion CTW", "Dynamic Depth CTW", or "Memory Bounded CTW"? UCTW is just not descriptive of what the algorithm does....
---Originality---
Clearly the work is directly built upon existing results. However, I would say that it combines the ideas in a novel way. It re-purposes an alternative formulation of CTW in a clever way and develops the necessary updates to repair the tree after an expansion or pruning.
---Significance---
I think UCTW is interesting and may have a significant practical impact. CTW is an important algorithm in the compression literature and is gaining interest in the AI/ML literature. I agree with the authors that memory is a major bottleneck when applying CTW to interesting problems, so a memory-bounded version is definitely of interest. Empirically UCTW shows promise -- though the experiments were performed on basic benchmarks, they do demonstrate that UCTW uses less memory than fixed-depth CTW and can cope with a memory bound.
UCTW is a little bit of a strange beast, though. One of the appeals of CTW is that it has this very clear Bayesian interpretation of representing a distribution over all prunings of a tree. It's not at all clear what happens to that interpretation under UCTW. UCTW is *explicitly* expanding and pruning the tree using a statistical test rather than the posterior beliefs. The claim that UCTW will eventually do as well as fixed-depth CTW makes sense, and is comforting -- it eventually finds its way to the original Bayesian formulation and can overcome any funkiness in the initialization due to the process up until that point. Furthermore it's not clear what happens to the regret bounds that CTW enjoys once this expansion/pruning scheme is introduced. This is not really an objection -- sometimes some philosophical/mathematical purity must be sacrificed for the sake of practicality. But it does make things muddier and it is harder to interpret the relationship of this algorithm to other CTW variants.
Similarly, once we discard the clean interpretation of CTW, it does raise the question for me of why use CTW at all at this point? The authors raise the comparison to USM, but don't really compellingly answer the question "Why not just use USM?" The point is made that USM uses the K-S test, which is expensive, and doesn't have a memory bound. However, the main ideas used here (use a likelihood ratio test instead and require a trade-off between expanding and pruning once the limit is hit) seem like they could just as easily be used in USM. I do not intend to suggest that the authors must invent a memory-bounded version of USM to compare to. However, if the authors do have a clear idea of why that's not a good idea, I think it would be valuable to discuss it. Otherwise I feel like the motivation of the work is a little bit incomplete.
***After Author Response***
I do think the name of the algorithm is misleading, and that leads to confusing comparisons too. For instance, in the author response the authors say "Also, the expansion tests in USM are performed over nonparametric representations of distributions over future reward, so the complexity of each test is a function of the sample size for each distribution." But, and I cannot stress this enough, *that is because USM is trying to predict value and UCTW is not.* They use different expansion tests because they address fundamentally different prediction problems. If one were to use a USM-like algorithm for predicting the next symbol from a finite alphabet, it would make perfect sense to represent the distribution using a histogram and use likelihood ratio tests instead of K-S; the complexity would be linear in the size of the alphabet, not the number of examples. USM uses K-S *because it is predicting a continuous value*. (In this case, I do nevertheless acknowledge that the CTW calculations have a nice side effect of making likelihood calculations efficient and thank the authors for that clarification).
I think this paper should be accepted, so, if that happens, obviously it will be up to the authors what they do with the title and the algorithm name and so on. My point is that the direct link between USM and UCTW is not sound -- USM and UCTW are solving different problems. Pretty much the *only* thing UCTW takes from USM is the fact that it incrementally grows its depth. So it's fine to draw this connection between two algorithms that incrementally expand a suffix tree, and its good to acknowledge inspiration, but they can't be directly compared. At best you can compare UCTW to a USM-like algorithm that predicts future symbols rather than utility, but then, because it has a different prediction problem, the design choices of USM might not make sense anymore. I think the name UCTW reinforces this flawed direct comparison because at first glance it implies that UCTW is solving the same problem as USM, and it is not. None of this is fatal; a motivated reader can untangle all of this. I just hope the authors will get really clear about the distinctions between these algorithms and then make sure the paper is as clear as it can possibly be.
I can see where the authors are coming from with the T-maze. I still think it's a bit of a wonky experiment, but adding a bit of the analysis given in the response to the paper would help a reader understand what the authors mean to extract from the results. |
nips_2017_1417 | Fast Black-box Variational Inference through Stochastic Trust-Region Optimization
We introduce TrustVI, a fast second-order algorithm for black-box variational inference based on trust-region optimization and the "reparameterization trick." At each iteration, TrustVI proposes and assesses a step based on minibatches of draws from the variational distribution. The algorithm provably converges to a stationary point. We implemented TrustVI in the Stan framework and compared it to two alternatives: Automatic Differentiation Variational Inference (ADVI) and Hessianfree Stochastic Gradient Variational Inference (HFSGVI). The former is based on stochastic first-order optimization. The latter uses second-order information, but lacks convergence guarantees. TrustVI typically converged at least one order of magnitude faster than ADVI, demonstrating the value of stochastic second-order information. TrustVI often found substantially better variational distributions than HFSGVI, demonstrating that our convergence theory can matter in practice. | SUMMARY OF THE PAPER:
The paper transfers concepts known in the optimization community and applies them to variational inference. It introduces a new method to optimize the stochastic objective function of black box variational inference. In contrast to the standard SGD, which optimizes the objective using only first-order derivatives, the proposed method takes estimates of the Hessian into account and approximates the objective function on a small trust region around the current iterate by a quadratic function. The algorithm automatically adapts the size of the trust region based on estimates of the quality of the quadratic model. The paper proves theoretically that the algorithm converges, and shows experimentally that convergence is typically faster than standard SGD.
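A generic stochastic trust-region loop of the kind described above might look as follows in numpy. This is only a schematic stand-in (a noisy quadratic objective, a crude damped-Newton subproblem solver, and a simple accept/shrink rule) and not TrustVI's actual reparameterization-based estimators or acceptance conditions; all constants are arbitrary.

```python
import numpy as np

def trust_region_step(theta, g, H, delta):
    """Approximately minimize the model m(s) = g^T s + 0.5 s^T H s subject to ||s|| <= delta:
    a damped Newton step, shrunk onto the trust-region boundary if it is too long."""
    lam = max(0.0, 1e-3 - np.linalg.eigvalsh(H).min())   # damp if H is not positive definite
    s = -np.linalg.solve(H + lam * np.eye(len(g)), g)
    if np.linalg.norm(s) > delta:
        s *= delta / np.linalg.norm(s)
    predicted = -(g @ s + 0.5 * s @ H @ s)               # predicted decrease of the model
    return s, predicted

rng = np.random.default_rng(0)
d, theta_star = 10, np.ones(10)
H_true = np.diag(np.linspace(1.0, 20.0, d))
def sample_f(theta, n=64):    # noisy objective value (stands in for a minibatch objective estimate)
    return 0.5 * (theta - theta_star) @ H_true @ (theta - theta_star) + rng.normal(0, 0.05, n).mean()
def sample_gH(theta, n=64):   # noisy gradient / Hessian estimates
    g = H_true @ (theta - theta_star) + rng.normal(0, 0.05, (n, d)).mean(0)
    return g, H_true + np.diag(rng.normal(0, 0.05, d))

theta, delta = np.zeros(d), 1.0
for it in range(30):
    g, H = sample_gH(theta)
    s, pred = trust_region_step(theta, g, H, delta)
    rho = (sample_f(theta) - sample_f(theta + s)) / max(pred, 1e-12)   # actual vs predicted decrease
    if rho > 0.1:
        theta, delta = theta + s, min(2.0 * delta, 10.0)   # accept the step, grow the region
    else:
        delta *= 0.5                                       # reject the step, shrink the region
print(np.round(theta, 2))    # approaches theta_star = [1, ..., 1]
```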
GENERAL IMPRESSION / POSITIVE ASPECTS:
The paper is well written and the proposed algorithm is potentially highly relevant given its generality and the reported improvements in speed of convergence. The experimental part summarizes results from a very large number of experiments, albeit it is not clear to me whether this includes any experiments on large-scale datasets that require minibatch sampling.
WHAT COULD BE IMPROVED:
1. I appreciate the fact that the paper includes an unambiguous definition of the algorithm and a rigorous proof that the proposed algorithm converges. To make the method more accessible, it would however be helpful to also briefly explain the algorithm in more intuitive terms. For example, a geometric interpretation of \gamma, \lambda, \eta, and \delta_0 would help practitioners to choose good values for these hyperparameters.
2. An often cited advantage of VI is that it scales to very large models when minibatch sampling (stochastic VI) is used. Minibatch sampling introduces an additional source of stochasticity. The paper discusses only stochasticity due to the black box estimation of the expectation under q(z). How does the proposed algorithm scale to large models? Are the convergence guarantees still valid when minibatch sampling is used and do we still expect similar speedups?
3. It is not clear to me how $\sigma_k$, defined in Condition 4, can be obtained in practice.
MINOR REMARKS:
1. Algorithm 1: R^d should be R^D in the first line.
2. Lines 108-114 discuss a set of hyperparameters, whose motivation becomes clear to the reader only after reading the convergence proofs. It might be clearer to present the hyperparameters in a table with a row for each hyperparameter, and columns for the symbol, a short and descriptive name, and the interval of allowed values.
3. Eq. 7: which norm is used for the Hessian matrix here?
4. Line 154: It may be helpful to point out at this point that one draws *new* samples from p_0, i.e., one may not reuse samples that were used to generate g_k and H_k. Otherwise, $\ell'_{ki}$ are not i.i.d. The algorithm box states this clearly, but it was not immediately clear to me from the main text. |
nips_2017_1730 | Near Minimax Optimal Players for the Finite-Time 3-Expert Prediction Problem
We study minimax strategies for the online prediction problem with expert advice. It has been conjectured that a simple adversary strategy, called COMB, is near optimal in this game for any number of experts. Our results and new insights make progress in this direction by showing that, up to a small additive term, COMB is minimax optimal in the finite-time three expert problem. In addition, we provide for this setting a new near minimax optimal COMB-based learner. Prior to this work, in this problem, learners obtaining the optimal multiplicative constant in their regret rate were known only when K = 2 or K → ∞. We characterize, when K = 3, the regret of the game scaling as \sqrt{8/(9π)T} ± log(T)^2, which gives for the first time the optimal constant in the leading (√T) term of the regret. | The paper studies the classic prediction with expert advice problem. There are a finite number k of experts and a finite number T of rounds. There is a player that makes sequential decisions for T rounds based on the advice of the k experts, and his goal is to minimize the maximum regret he can experience (minimax regret). Naturally, the optimal adversarial strategy is a key quantity to study here. This paper takes up the conjectured minimax optimal adversarial strategy called "Comb strategy" in the Gravin et al. paper and shows that it is indeed minimax optimal in the case of 3 experts. The high level proof structure is to show that the comb strategy is optimal assuming that in all the future rounds the comb strategy will be employed, and then induct. (The Comb strategy is not exactly optimal though --- thus one has to lose a bit in terms of optimality, and bound how the error accumulates as one inducts.) One of the useful lemmas in doing this is the "exchangeability property", which shows that deviating from the comb strategy in exactly one round will give a regret that is independent of which round one deviates off of it. The need for this property arises naturally when one takes the induction proof structure above. The proof of this property proceeds by considering another adversarial strategy, twin-comb, also proved to be optimal for the geometric horizon setting by Gravin et al., and then showing that playing Comb + non-comb is the same as playing twin-comb + comb, where non-comb is a mixture of {Comb, twin-comb}. The question is how to generalize this for an arbitrary number of experts k. For general k, is it possible to push this idea through with the generalized twin-comb? And what would be the appropriate generalization? This is not immediately clear from the paper, but I still think the contribution as it is already meets the NIPS bar, and certainly has some neat ideas.
Given that this paper takes a conjectured optimal strategy, whose optimality for a small number of experts can be verified directly by writing a computer program to compute optimal adversarial values, its primary value can lie only in the new techniques it introduces. The main challenge is that since the comb adversary is not the exactly optimal adversary, all the nice properties of optimality are gone, and one has to control how the errors of this suboptimal strategy accumulate in order to establish approximate optimality. The "exchangeability" property is the main new idea for doing this. There is also the other nice contribution of proving that the player's algorithm that flows out of the comb adversary, which itself is not the exactly optimal but only approximately optimal, is an approximately optimal algorithm. Usually, when the adversary is not optimal, proving optimality for the algorithm that simulates it is not straight-forward. While the techniques don't immediately generalize, the general approach of controlling COMB adversary's errors with an exchangeability technique seems like the way to go for general k. Given this, and the fact that the work is technically interesting, I would say it meets the NIPS bar. |
nips_2017_375 | Multimodal Learning and Reasoning for Visual Question Answering
Reasoning about entities and their relationships from multimodal data is a key goal of Artificial General Intelligence. The visual question answering (VQA) problem is an excellent way to test such reasoning capabilities of an AI model and its multimodal representation learning. However, the current VQA models are oversimplified deep neural networks, comprised of a long short-term memory (LSTM) unit for question comprehension and a convolutional neural network (CNN) for learning single image representation. We argue that the single visual representation contains a limited and general information about the image contents and thus limits the model reasoning capabilities. In this work we introduce a modular neural network model that learns a multimodal and multifaceted representation of the image and the question. The proposed model learns to use the multimodal representation to reason about the image entities and achieves a new state-of-the-art performance on both VQA benchmark datasets, VQA v1.0 and v2.0, by a wide margin. | Summary
This paper proposes an approach that combines the output of different vision systems in a simple and modular manner for the task of visual question answering. The high level idea of the model is as follows. The question is first encoded into a bag of words representation (and passed through an MLP). Then different vision systems (which extract raw features, or compute attention on images using object detection outputs or face detection outputs or scene classification outputs) are all condensed into a representation compatible with the question. Finally the approach takes an outer product between the image representations from various tasks and the question, and concatenates the obtained representations. This concatenated feature is fed as input to an answering model which produces distributions over answer tokens. The entire model is trained with max-likelihood. Results on the VQA 1.0 as well as VQA 2.0 datasets show competitive performance with respect to the state of the art.
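To fix ideas about the outer-product fusion described above, a schematic PyTorch sketch follows. The projection sizes, the number of vision modules, and the answer-vocabulary size are invented placeholders, and the paper additionally compresses the fused representations (its Equation (3)), which is omitted here.

```python
import torch
import torch.nn as nn

class OuterProductFusion(nn.Module):
    """Fuses one question embedding with several task-specific image embeddings by taking an
    outer product with each, flattening, concatenating, and classifying over answers."""
    def __init__(self, q_dim=64, v_dims=(64, 64, 64), n_answers=1000):
        super().__init__()
        self.v_proj = nn.ModuleList([nn.Linear(d, 32) for d in v_dims])   # condense each vision module
        self.q_proj = nn.Linear(q_dim, 32)
        fused_dim = 32 * 32 * len(v_dims)
        self.answer = nn.Sequential(nn.Linear(fused_dim, 512), nn.ReLU(),
                                    nn.Linear(512, n_answers))
    def forward(self, q_bow, visual_feats):     # q_bow: (B, q_dim); visual_feats: list of (B, d_i)
        q = torch.relu(self.q_proj(q_bow))
        fused = []
        for proj, v in zip(self.v_proj, visual_feats):
            r = torch.relu(proj(v))
            fused.append(torch.einsum('bi,bj->bij', q, r).flatten(1))   # bilinear (outer-product) term
        return self.answer(torch.cat(fused, dim=1))

model = OuterProductFusion()
feats = [torch.rand(2, 64) for _ in range(3)]    # e.g. raw CNN, object-attended, scene features
print(model(torch.rand(2, 64), feats).shape)     # torch.Size([2, 1000])
```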
Strengths
1. At a high level the proposed approach is very well motivated since the vqa task can be thought of as an ensemble of different tasks at different levels of granularity in terms of visual reasoning. The approach has flavors of solving each task separately and then putting everything together for the vqa task.
2. The results feature ablations of the proposed approach which helps us understand the contributions of different modules in achieving the performance gains.
3. It is really encouraging that the approach obtains state of the art results on VQA. Traditionally there has been a gap between modular architectures which we “feel” should be good / right for the task of VQA and the actual performance realized by such models. This paper is a really nice contribution towards integrating different vision sub-problems for VQA, and as such is a really good step.
Weakness
1. When discussing related work it is crucial to mention related work on modular networks for VQA such as [A], otherwise the introduction right now seems to paint a picture that no one does modular architectures for VQA.
2. Given that the paper uses a bilinear layer to combine representations, it should mention in related work the rich line of work in VQA, starting with [B], which uses bilinear pooling for learning joint question-image representations. The way things are presented right now, a novice reader might think this is the first application of bilinear operations for question answering (based on reading up to the related work section). Bilinear pooling is compared to later.
3. L151: Would be interesting to have some sort of a group norm in the final part of the model (g, Fig. 1) to encourage disentanglement further.
4. It is very interesting that the approach does not use an LSTM to encode the question. This is similar to the work on a simple baseline for VQA [C] which also uses a bag of words representation.
5. (*) Sec. 4.2: it is not clear how the question is being used to learn an attention on the image feature, since the description under Sec. 4.2 does not match with the equation in the section. Specifically, the equation does not have any term for r^q, which is the question representation. Would be good to clarify. Also it is not clear what \sigma means in the equation. Does it mean the sigmoid activation? If so, multiplying two sigmoid activations (which the \alpha_v computation seems to do) might be ill-conditioned and numerically unstable.
6. (*) Is the object detection based attention being performed on the image or on some convolutional feature map V \in R^{FxWxH}? Would be good to clarify. Is some sort of rescaling done based on the receptive field to figure out which image regions belong correspond to which spatial locations in the feature map?
7. (*) L254: Trimming the questions after the first 10 seems like an odd design choice, especially since the question model is just a bag of words (so it is not expensive to encode longer sequences).
8. L290: it would be good to clarify how the implemented bilinear layer is different from other approaches which do bilinear pooling. Is the major difference the dimensionality of embeddings? How is the bilinear layer swapped out with the Hadamard product and MCB approaches? Is the compression of the representations using Equation (3) still done in this case?
Minor Points:
- L122: Assuming that we are multiplying in equation (1) by a dense projection matrix, it is unclear how the resulting matrix is expected to be sparse (aren't we multiplying by a nicely-conditioned matrix to make sure everything is dense?).
- Likewise, unclear why the attended image should be sparse. I can see this would happen if we did attention after the ReLU but if sparsity is an issue why not do it after the ReLU?
Preliminary Evaluation
The paper is a really nice contribution towards leveraging traditional vision tasks for visual question answering. Major points and clarifications for the rebuttal are marked with a (*).
[A] Andreas, Jacob, Marcus Rohrbach, Trevor Darrell, and Dan Klein. 2015. “Neural Module Networks.” arXiv [cs.CV]. arXiv. http://arxiv.org/abs/1511.02799.
[B] Fukui, Akira, Dong Huk Park, Daylen Yang, Anna Rohrbach, Trevor Darrell, and Marcus Rohrbach. 2016. “Multimodal Compact Bilinear Pooling for Visual Question Answering and Visual Grounding.” arXiv [cs.CV]. arXiv. http://arxiv.org/abs/1606.01847.
[C] Zhou, Bolei, Yuandong Tian, Sainbayar Sukhbaatar, Arthur Szlam, and Rob Fergus. 2015. “Simple Baseline for Visual Question Answering.” arXiv [cs.CV]. arXiv. http://arxiv.org/abs/1512.02167. |
nips_2017_1198 | Batch Renormalization: Towards Reducing Minibatch Dependence in Batch-Normalized Models
Batch Normalization is quite effective at accelerating and improving the training of deep models. However, its effectiveness diminishes when the training minibatches are small, or do not consist of independent samples. We hypothesize that this is due to the dependence of model layer inputs on all the examples in the minibatch, and different activations being produced between training and inference. We propose Batch Renormalization, a simple and effective extension to ensure that the training and inference models generate the same outputs that depend on individual examples rather than the entire minibatch. Models trained with Batch Renormalization perform substantially better than batchnorm when training with small or non-i.i.d. minibatches. At the same time, Batch Renormalization retains the benefits of batchnorm such as insensitivity to initialization and training efficiency. | In this paper, the authors propose the Batch Renormalization technique to alleviate the problem of batchnorm when dealing with small or non-i.i.d. minibatches. Reducing the dependence on large minibatch sizes is very important in many applications, especially when training large neural network models with limited GPU memory. The proposed method is very simple to understand and implement. And experiments show that Batch Renormalization performs well with non-i.i.d. minibatches, and improves the results of small minibatches compared with batchnorm.
Firstly, the authors give a clear review of batchnorm, and conclude that the key drawbacks of batchnorm are the inconsistency of mean and variance used in training and inference and the instability when dealing with small minibatches. Using moving averages to perform normalization would be the first thought; however, this would lead to the model blowing up. So the authors propose a simple batch renormalization method to combine minibatch mean and variance with moving averages. In my opinion, what Batch Renormalization does is to gradually change from the original batchnorm (normalizing with minibatch mean and variance) to batchnorm with (almost) only moving averages. In this way, the model can adopt part of the advantage of moving averages and converge successfully.
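The correction being described can be written down in a few lines. The sketch below follows the usual statement of Batch Renormalization (r and d clipped and excluded from backpropagation); the (N, C) layout, clipping limits, and momentum are chosen arbitrarily for illustration and are not the paper's exact settings.

```python
import torch

def batch_renorm(x, running_mean, running_var, gamma, beta,
                 r_max=3.0, d_max=5.0, momentum=0.01, eps=1e-5):
    """Batch Renormalization forward pass for (N, C) activations during training.
    r and d pull the minibatch statistics toward the moving averages; both are clipped
    and excluded from backpropagation (the .detach() calls), so gradients still flow
    through the minibatch mean and standard deviation as in plain batchnorm."""
    mu_b = x.mean(dim=0)
    var_b = x.var(dim=0, unbiased=False)
    sigma_b = (var_b + eps).sqrt()
    sigma_mov = (running_var + eps).sqrt()
    r = (sigma_b / sigma_mov).detach().clamp(1.0 / r_max, r_max)
    d = ((mu_b - running_mean) / sigma_mov).detach().clamp(-d_max, d_max)
    x_hat = (x - mu_b) / sigma_b * r + d        # with r = 1 and d = 0 this is exactly batchnorm
    y = gamma * x_hat + beta
    running_mean += momentum * (mu_b.detach() - running_mean)   # moving averages used at inference
    running_var += momentum * (var_b.detach() - running_var)
    return y

C = 4
x = torch.randn(32, C)
gamma = torch.ones(C, requires_grad=True)
beta = torch.zeros(C, requires_grad=True)
y = batch_renorm(x, torch.zeros(C), torch.ones(C), gamma, beta)
print(y.mean(0), y.std(0))   # per-channel mean near 0 and std near 1, up to minibatch noise
```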
And I have several questions about this paper.
(1) When using a large minibatch size (such as 32), why does Batch Renormalization have no advantage compared with batchnorm? It seems that the consistency of mean and variance between training and inference does not help much in this case.
(2) Experiments show that the result of small minibatch size (batchsize=4) is worse than the result of large minibatch size (batchsize=32). So I wonder if using two (multiple) moving averages (mean and variance) with different update rates (such as one with 0.1, one with 0.01) would help. A small update rate helps to solve the inconsistency problem, and a large update rate helps to solve the small minibatch size problem.
(3) The results of how r_max and d_max will affect the performance are not provided. There seems to be plenty of parameter tuning work involved.
This work is good, and I am looking forward to seeing a more elegant solution to this problem. |
nips_2017_374 | MarrNet: 3D Shape Reconstruction via 2.5D Sketches
3D object reconstruction from a single image is a highly under-determined problem, requiring strong prior knowledge of plausible 3D shapes. This introduces challenges for learning-based approaches, as 3D object annotations are scarce in real images. Previous work chose to train on synthetic data with ground truth 3D information, but suffered from domain adaptation when tested on real data. In this work, we propose MarrNet, an end-to-end trainable model that sequentially estimates 2.5D sketches and 3D object shape. Our disentangled, two-step formulation has three advantages. First, compared to full 3D shape, 2.5D sketches are much easier to be recovered from a 2D image; models that recover 2.5D sketches are also more likely to transfer from synthetic to real data. Second, for 3D reconstruction from 2.5D sketches, systems can learn purely from synthetic data. This is because we can easily render realistic 2.5D sketches without modeling object appearance variations in real images, including lighting, texture, etc. This further relieves the domain adaptation problem. Third, we derive differentiable projective functions from 3D shape to 2.5D sketches; the framework is therefore end-to-end trainable on real images, requiring no human annotations. Our model achieves state-of-the-art performance on 3D shape reconstruction. | This paper describes a technique for estimating voxelized 3D shapes from single images. As directly predicting 3D shapes from images is hard, the paper proposes to separate the problem into 2 tasks - inspired by Marr's theory about vision. In the first part the method takes the image as input and predicts several intrinsic quantities ("2.5D sketch"), in particular depth, surface normals and the 2D silhouette using an encoder-decoder architecture. This information is fed into a second encoder-decoder network which predicts the volumetric 3D representation at 128^3 resolution. The advantage of the method is that the second part can be trained on synthetic data allowing for "domain transfer" based on the 2.5D sketches which are invariant to the actual appearance. Experiments (mainly qualitative) on Shapenet, IKEA and Pascal3D+ are presented.
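Schematically, the two-stage pipeline summarized above looks like the following toy PyTorch sketch. The layer sizes and resolutions are made up, only the data flow is meant to be representative, and the differentiable reprojection loss that couples the voxels back to the 2.5D sketches is omitted.

```python
import torch
import torch.nn as nn

class SketchNet(nn.Module):
    """Stage 1: image -> 2.5D sketches (depth, surface normals, silhouette)."""
    def __init__(self):
        super().__init__()
        self.enc = nn.Sequential(nn.Conv2d(3, 16, 4, 2, 1), nn.ReLU(),
                                 nn.Conv2d(16, 32, 4, 2, 1), nn.ReLU())
        self.dec = nn.Sequential(nn.ConvTranspose2d(32, 16, 4, 2, 1), nn.ReLU(),
                                 nn.ConvTranspose2d(16, 5, 4, 2, 1))   # 1 depth + 3 normals + 1 mask
    def forward(self, img):
        out = self.dec(self.enc(img))
        return out[:, :1], out[:, 1:4], torch.sigmoid(out[:, 4:])

class ShapeNet3D(nn.Module):
    """Stage 2: 2.5D sketches -> voxel occupancy grid (trainable on synthetic renders alone)."""
    def __init__(self, vox=32):
        super().__init__()
        self.vox = vox
        self.enc = nn.Sequential(nn.Conv2d(5, 32, 4, 2, 1), nn.ReLU(),
                                 nn.AdaptiveAvgPool2d(4), nn.Flatten(),
                                 nn.Linear(32 * 4 * 4, 256), nn.ReLU())
        self.dec = nn.Linear(256, vox ** 3)
    def forward(self, depth, normals, mask):
        z = self.enc(torch.cat([depth, normals, mask], dim=1))
        return torch.sigmoid(self.dec(z)).view(-1, self.vox, self.vox, self.vox)

sketch_net, shape_net = SketchNet(), ShapeNet3D()
depth, normals, mask = sketch_net(torch.rand(2, 3, 64, 64))
voxels = shape_net(depth, normals, mask)
print(depth.shape, voxels.shape)   # torch.Size([2, 1, 64, 64]) torch.Size([2, 32, 32, 32])
```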
Overall, I liked the idea. The paper is clear, well motivated and well written, and the qualitative results seem convincing. However, I have some concerns about the fairness of the evaluation and the self-supervised part which I would like to have answered in the rebuttal before I turn towards a more positive score for this paper.
Positive aspects:
+ Very well written and motivated
+ Relevant topic in computer vision
+ Interesting idea of using intermediate intrinsic representations to facilitate the task
+ Domain transfer idea is convincing to me (though the self-supervision and fine-tuning of the real->sketch network are not fully clear to me, see below)
+ Qualitative results clearly superior than very recent baselines
Negative aspects:
- Self-supervision: It is not clear to me why this self-supervision should work. The paper says that the method fine-tunes on single images, but if the parameters are optimized for a single image, couldn't the network diverge to predict a different shape encapsulated in the shape space for an image depicting another object while still reducing the loss? Also, it is not very well specified what is finetuned exactly. It reads as if it was only the encoder of the second part. But how can the pre-trained first part then be adapted to a new domain? If the first part fails, the second will as well.
- Domain adaptation: I agree that it is easy to replace the first part, but I don't see how this can be trained in the absence of real->sketch training pairs. To me it feels as if the method profits from the fact that for cars and chairs pretraining on synthetic data already yields good models that work well with real data. I would expect that for stronger domain differences the method would not be able to work without sketch supervision. Isn't the major problem there to predict good sketches from few examples?
- Baselines: It is not clear if the direct (1-step) baseline in the experiments uses the same architecture or at least the same number of parameters as the proposed technique to have a fair evaluation. I propose an experiment where the same architecture is used but instead of forcing the model (i.e., regularizing it) to capture 2D sketches as intermediate representations, let it learn all parameters from a random initialization end-to-end. This would be a fair baseline and it would be interesting to see which representation emerges. Another fair experiment would be one which also has 2 encoder-decoder networks and the same number of parameters but maybe a different distribution of feature maps across layers. Finally, a single encoder-decoder architecture with a similar architecture as the proposed one but increased by the number of parameters freed by removing the first part would be valuable. From the description in the paper it is totally unclear what the baseline in Fig. 4 is. I suggest such baselines for all experiments.
- DRC: It should be clarified how DRC is used here and if the architecture and resolution is adopted for fairness. Also, DRC is presented to allow for multi-view supervision which is not done here, so this should be commented upon. Further the results of the 3D GAN seem much more noisy compared to the original paper. Is there maybe a problem with the training?
- Quantitative Experiments: I do not fully agree with the authors that quantitative experiments are not useful. As the paper also evaluates on ShapeNet, quantitative experiments would be easily possible, and masks around the surfaces could be used to emphasize thin structures in the metric etc. I suggest to add this at least for ShapeNet.
Minor comments:
- Sec. 3.1: the deconvolution/upconvolution operations are not mentioned
- It is unclear how 128^3 voxel resolution can be achieved which typically doesn't fit into memory for reasonable batch sizes. Also, what is the batch size and other hyperparameters for training?
- Fig. 8,9,10: I suggest to show the predicted sketches also for the real examples for the above mentioned reasons. |
nips_2017_2565 | A simple neural network module for relational reasoning
Relational reasoning is a central component of generally intelligent behavior, but has proven difficult for neural networks to learn. In this paper we describe how to use Relation Networks (RNs) as a simple plug-and-play module to solve problems that fundamentally hinge on relational reasoning. We tested RN-augmented networks on three tasks: visual question answering using a challenging dataset called CLEVR, on which we achieve state-of-the-art, super-human performance; textbased question answering using the bAbI suite of tasks; and complex reasoning about dynamic physical systems. Then, using a curated dataset called Sort-of-CLEVR we show that powerful convolutional networks do not have a general capacity to solve relational questions, but can gain this capacity when augmented with RNs. Thus, by simply augmenting convolutions, LSTMs, and MLPs with RNs, we can remove computational burden from network components that are not well-suited to handle relational reasoning, reduce overall network complexity, and gain a general ability to reason about the relations between entities and their properties. | The paper proposes a plug and play module (called Relation Networks (RNs)) specialized for relational reasoning. The module is composed of Multi Layer Perceptrons and considers relations between all pairs of objects. The proposed module when plugged into traditional networks achieves state of the art performance on the CLEVR visual question answering dataset, state of the art (with joint training for all tasks) on the bAbI textual question answering dataset and high performance (93% on one task and 95% on another) on a newly collected dataset of simulated physical mass-spring systems. The paper also collects a dataset similar to CLEVR to demonstrate the effectiveness of the proposed RNs for relational questions.
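The module itself is compact: a minimal PyTorch sketch of the pairwise formulation RN(O) = f_phi( sum_{i,j} g_theta(o_i, o_j, q) ) is given below. The hidden sizes, object/question dimensions, and answer count are arbitrary placeholders, not the paper's configuration.

```python
import torch
import torch.nn as nn

class RelationNetwork(nn.Module):
    """RN(O) = f_phi( sum over all ordered object pairs of g_theta(o_i, o_j, q) )."""
    def __init__(self, obj_dim, q_dim, hidden=256, n_answers=10):
        super().__init__()
        self.g = nn.Sequential(nn.Linear(2 * obj_dim + q_dim, hidden), nn.ReLU(),
                               nn.Linear(hidden, hidden), nn.ReLU())
        self.f = nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU(),
                               nn.Linear(hidden, n_answers))
    def forward(self, objects, question):        # objects: (B, N, obj_dim), question: (B, q_dim)
        B, N, D = objects.shape
        o_i = objects.unsqueeze(2).expand(B, N, N, D)    # first object of each pair
        o_j = objects.unsqueeze(1).expand(B, N, N, D)    # second object of each pair
        q = question.unsqueeze(1).unsqueeze(1).expand(B, N, N, question.shape[-1])
        pairs = torch.cat([o_i, o_j, q], dim=-1)         # (B, N, N, 2*D + q_dim)
        relations = self.g(pairs).sum(dim=(1, 2))        # aggregate over all object pairs
        return self.f(relations)

rn = RelationNetwork(obj_dim=24, q_dim=32)
print(rn(torch.rand(4, 8, 24), torch.rand(4, 32)).shape)   # torch.Size([4, 10])
```

Because the aggregation is a sum over all pairs, the module is invariant to object ordering, which is the property that lets it plug into CNN feature maps or LSTM states without further structural assumptions.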
Strengths:
1. The proposed Relation Network is a novel neural network specialized for relational reasoning. The success of the proposed network is extensively shown by experimenting with three different tasks and clearly analyzing the effectiveness for relational questions by collecting a novel dataset similar to CLEVR.
2. The proposed RNs have been shown to be able to work with different forms of input -- explicit state representations as well as features from a CNN or LSTM.
3. The paper is well written and the details of model architecture including hyperparameters are provided.
4. As argued in the paper, I agree that relational reasoning is central to intelligence. Since RNs are shown to achieve this kind of reasoning and, as a result, to perform better than existing networks on tasks requiring it, they seem to be of significant importance for designing reasoning networks.
Weaknesses:
1. Could the authors please analyze and comment on how complicated the relations handled by RNs can be? Is it the case that RNs perform well on single-hop relations such as "what is the color of the object closest to the blue object", which requires reasoning about only one hop (the distance between the blue object and all other objects), but not so well on multi-hop relations such as "What shape is the small object that is in front of the yellow matte thing and behind the gray sphere?"? From the failure cases in Table 1 of the supplementary material, it seems that the model has difficulty answering questions involving multiple hops of relations.
2. L203-204: it is not clear to me what the authors mean by "we tagged ... support set". Is this referring to some form of human annotation? If so, could the authors please elaborate on what happens at test time?
3. All the datasets experimented with in the paper are synthetic. Could the authors please comment on how they expect RNs to work on real datasets such as the VQA dataset from Antol et al.?
Post-rebuttal comments:
The authors have provided a satisfactory response to my question about multi-hop reasoning. However, I would still like to see experiments on a real VQA dataset to see how effective RNs are at dealing with the amount of variation real data points show (in vision as well as in language). So it would be great if the authors could include results on the VQA dataset (Antol et al., ICCV 2015) in the camera-ready.
nips_2017_844 | ADMM without a Fixed Penalty Parameter: Faster Convergence with New Adaptive Penalization
Alternating direction method of multipliers (ADMM) has received tremendous interest for solving numerous problems in machine learning, statistics and signal processing. However, it is known that the performance of ADMM and many of its variants is very sensitive to the penalty parameter of a quadratic penalty applied to the equality constraints. Although several approaches have been proposed for dynamically changing this parameter during the course of optimization, they do not yield a theoretical improvement in the convergence rate and are not directly applicable to stochastic ADMM. In this paper, we develop a new ADMM and its linearized variant with a new adaptive scheme to update the penalty parameter. Our methods can be applied under both deterministic and stochastic optimization settings for structured non-smooth objective functions. The novelty of the proposed scheme is that it is adaptive to a local sharpness property of the objective function, which marks the key difference from previous adaptive schemes that adjust the penalty parameter per iteration based on certain conditions on the iterates. On the theoretical side, given the local sharpness characterized by an exponent θ ∈ (0, 1], we show that the proposed ADMM enjoys an improved iteration complexity of O(1/ε^(1−θ)) in the deterministic setting and an iteration complexity of O(1/ε^(2(1−θ))) in the stochastic setting, without smoothness and strong convexity assumptions. The complexity in either setting improves that of the standard ADMM, which only uses a fixed penalty parameter. On the practical side, we demonstrate that the proposed algorithms converge comparably to, if not much faster than, ADMM with a fine-tuned fixed penalty parameter. | Summary: This paper shows that the O(1/eps) iteration complexity of ADMM can be improved to O(1/eps^(1-theta)), where theta is a parameter that characterizes how sharply the objective function increases with increasing distance to the optimal solution. This improvement is shown under a locally adaptive version of ADMM in which the penalty parameter is increased after every $t$ steps of ADMM. The method is extended to stochastic ADMM, whose O(1/eps^2) iteration complexity is shown to similarly improve. The results are backed by experiments on generalized Lasso problems.
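To make the adaptive-penalty idea concrete, below is a minimal numpy sketch of the stage-wise scheme as I understand it (run ADMM for t iterations, increase the penalty, warm-start the next stage), applied to a standard lasso splitting. The problem instance, update rules, growth factor, and all names are my own illustration and not the authors' algorithm or code.

    import numpy as np

    def soft_threshold(v, kappa):
        return np.sign(v) * np.maximum(np.abs(v) - kappa, 0.0)

    def admm_stage(A, b, lam, rho, t, x, z, u):
        """Standard ADMM for 0.5*||Ax - b||^2 + lam*||z||_1 s.t. x = z, run for t iterations."""
        AtA, Atb = A.T @ A, A.T @ b
        M = np.linalg.inv(AtA + rho * np.eye(A.shape[1]))   # cached for the whole stage
        for _ in range(t):
            x = M @ (Atb + rho * (z - u))                   # x-update
            z = soft_threshold(x + u, lam / rho)            # z-update
            u = u + x - z                                   # scaled dual update
        return x, z, u

    def stagewise_admm(A, b, lam, rho0=1.0, t=50, stages=8):
        """Double rho after every t iterations, warm-starting each stage from the previous one."""
        n = A.shape[1]
        x, z, u = np.zeros(n), np.zeros(n), np.zeros(n)
        rho = rho0
        for _ in range(stages):
            x, z, u = admm_stage(A, b, lam, rho, t, x, z, u)
            rho *= 2.0   # the "factor of 2" I ask about below
            u /= 2.0     # standard rescaling of the scaled dual when rho changes; whether the paper does this I cannot tell
        return z

    rng = np.random.default_rng(0)
    A = rng.standard_normal((100, 20))
    b = A @ (rng.standard_normal(20) * (rng.random(20) < 0.3))
    print(stagewise_admm(A, b, lam=0.1)[:5])

How t and the growth factor should be chosen, and how that choice interacts with the sharpness exponent theta, is what the paper's analysis addresses; the sketch above only illustrates the mechanics of restarting with an increased penalty.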
Overall, the paper is well written and makes an important contribution towards improving the analysis of ADMM under adaptive penalty parameters. On a practical note, I would have liked to see some comparisons against other "adaptive-rho" heuristics used in the literature (see Boyd's monograph). As background, it may be valuable to some readers to see how the local error bound relates to the KL property, and to see some examples of "theta" for problems of interest in machine learning.
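For the benefit of such readers, the local error bound (sharpness) condition I have in mind is, in its usual form (constants and conventions may differ slightly from the paper's),

    \mathrm{dist}(x, \Omega_*) \;\le\; c\,\bigl(F(x) - F_*\bigr)^{\theta}, \qquad \theta \in (0, 1],

for all x in a sublevel set {x : F(x) - F_* \le \epsilon_0}, where \Omega_* denotes the optimal solution set. To my understanding, for convex F this is equivalent (up to constants) to the Kurdyka-Lojasiewicz inequality with desingularizing function \varphi(s) = c\, s^{\theta}; as examples, \theta = 1 holds for polyhedral objectives (e.g., an l_1-regularized least-absolute-deviations loss), while \theta = 1/2 corresponds to quadratic growth around the optimal set.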
There are many grammatical errors, typos, and odd phrases in the abstract: "tremendous interests" --> "tremendous interest", "variants are" --> "variants is", "penalty scheme of lies at it is...", "iterate message", etc.
The LADMM-AP behavior in plot 1(e) is somewhat strange. Any explanations?
Is the factor of 2 in LA-ADMM optimal in any sense? In practice, ADMM stopping criteria include primal-dual tolerance thresholds; shouldn't LA-ADMM use those instead of a fixed t?
- With regard to linearized ADMM (Eqn. 7), please comment on its relationship to the Chambolle-Pock updates (https://hal.archives-ouvertes.fr/hal-00490826/document).
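For reference (using my notation, which may differ from both papers'), the Chambolle-Pock updates I mean, for the saddle-point problem min_x max_y <Kx, y> + G(x) - F^*(y), are

    y^{k+1} = \mathrm{prox}_{\sigma F^*}\bigl(y^{k} + \sigma K \bar{x}^{k}\bigr),
    x^{k+1} = \mathrm{prox}_{\tau G}\bigl(x^{k} - \tau K^{\top} y^{k+1}\bigr),
    \bar{x}^{k+1} = x^{k+1} + \vartheta\,(x^{k+1} - x^{k}),

with step sizes satisfying \sigma\tau\|K\|^2 < 1 and over-relaxation \vartheta (typically 1). It would be helpful to know whether the linearized update in Eqn. 7 can be viewed as a special case of, or is closely related to, this scheme.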
nips_2017_2138 | YASS: Yet Another Spike Sorter
Spike sorting is a critical first step in extracting neural signals from large-scale electrophysiological data. This manuscript describes an efficient, reliable pipeline for spike sorting on dense multi-electrode arrays (MEAs), where neural signals appear across many electrodes and spike sorting currently represents a major computational bottleneck. We present several new techniques that make dense MEA spike sorting more robust and scalable. Our pipeline is based on an efficient multistage "triage-then-cluster-then-pursuit" approach that initially extracts only clean, high-quality waveforms from the electrophysiological time series by temporarily skipping noisy or "collided" events (representing two neurons firing synchronously). This is accomplished by developing a neural network detection method followed by efficient outlier triaging. The clean waveforms are then used to infer the set of neural spike waveform templates through nonparametric Bayesian clustering. Our clustering approach adapts a "coreset" approach for data reduction and uses efficient inference methods in a Dirichlet process mixture model framework to dramatically improve the scalability and reliability of the entire pipeline. The "triaged" waveforms are then finally recovered with matching-pursuit deconvolution techniques. The proposed methods improve on the state-of-the-art in terms of accuracy and stability on both real and biophysically-realistic simulated MEA data. Furthermore, the proposed pipeline is efficient, learning templates and clustering faster than real-time for a ≈500-electrode dataset, largely on a single CPU core. | [UPDATE AFTER AUTHOR RESPONSE]
The authors' response confirms my rating – it's a valuable paper. I'm optimistic that they will address mine and the other reviewers' concerns in their revision.
[ORIGINAL REVIEW]
The paper describes YASS: Yet Another Spike Sorter, a well-engineered pipeline for sorting multi-electrode array (MEA) recordings. The approach is sound, combining many state-of-the-art approaches into a complete pipeline. The paper is well written, follows a clear structure and provides convincing evidence that the work indeed advances the state of the art for sorting retinal MEA data.
While the paper is already a valuable contribution as is, I think it has much greater potential if the authors address some of the issues detailed below. In brief, my main concerns are
(1) the lack of publicly available code,
(2) the poor description of the neural network detection method,
(3) issues with applying the pipeline to cortical data.
Details:
(1) Code is not available (or at least the manuscript does not provide a URL). Since spike sorting is at this point mainly an engineering problem (but a non-trivial one), a mere description of the approach is only half as valuable as the actual code implementing the pipeline. Thus, I strongly encourage the authors to go the extra mile and make the code available.
(2) Neural network spike detection. This part seems to be the only truly innovative one (all other components of the pipeline have been described/used before). However, it remains unclear to me how the authors generated their training data. Section C.2 describes different ways of generating training data, but it is not clear which one (or which combination) the authors use.
(a) Using pre-existing sorts.
First, most labs do *not* have existing, properly sorted data available when moving to dense MEAs, because they do not have a pipeline for sorting such array data and – as the authors point out – existing methods do not scale properly.
Second, it is not clear to me how existing sorts would help train a more robust neural network for detection. Did the authors inspect every single waveform snippet and label it as clean or not? If not, based on what algorithm did they decide which waveform snippets in the training data are clean? Why do they need to train a neural network instead of just using this algorithm? How do they deal with misalignments, which create label noise?
If using pre-existing sorts is what the authors did, they need to provide more information on how exactly they did it and why it works. In the current form, their work cannot be reproduced.
(b) Generating synthetic training data by superimposing waveform templates on background noise. This could be a reasonable approach. Is it actually used for data augmentation, or not used at all and merely described as a potential alternative? What is the evidence that this approach is useful? The synthetic data may not be representative of real recordings.
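To be concrete about the kind of augmentation I mean in (b), a minimal sketch could look like the following; all function names, parameters, and jitter choices are mine and not taken from the manuscript.

    import numpy as np

    def make_synthetic_snippets(templates, noise, n_snippets, snippet_len=61, rng=None):
        """Superimpose randomly scaled/shifted templates on (assumed spike-free) background noise.
        templates: (n_units, waveform_len) array with waveform_len comfortably < snippet_len;
        noise: a long 1-D background trace."""
        rng = rng or np.random.default_rng()
        X, y = [], []
        for _ in range(n_snippets):
            start = rng.integers(0, len(noise) - snippet_len)
            snippet = noise[start:start + snippet_len].copy()
            if rng.random() < 0.5:                        # positive example: inject a spike
                t = templates[rng.integers(len(templates))]
                amp = rng.uniform(0.8, 1.2)               # amplitude jitter
                shift = rng.integers(-3, 4)               # temporal jitter around the center
                c = snippet_len // 2 - len(t) // 2 + shift
                snippet[c:c + len(t)] += amp * t
                y.append(1)
            else:
                y.append(0)                               # negative example: noise only
            X.append(snippet)
        return np.stack(X), np.array(y)

    # toy usage with a single Gaussian-shaped template and white background noise
    template = (-8.0 * np.exp(-0.5 * ((np.arange(31) - 15) / 2.5) ** 2))[None, :]
    noise = np.random.default_rng(0).standard_normal(100_000)
    X, y = make_synthetic_snippets(template, noise, 1000, rng=np.random.default_rng(1))

Whether a detector trained on such snippets transfers to real recordings, where the background itself contains spikes and firing is correlated, is exactly the concern I raise in (3)(b) below.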
(3) Generalization to cortical data. I am quite confident that the pipeline works well for retinal data, but I doubt that it will do so for cortical data (some arguments below). I think this limitation needs to be discussed and acknowledged more explicitly (abstract, intro, conclusions).
(a) In cortical recordings, waveform drift is a serious issue that arises in pretty much all non-chronic recordings (and chronically working high-density MEAs are still to be demonstrated). Thus, modeling drift is absolutely crucial for recordings that last longer than a few minutes.
(b) Getting good training data for the NN detection is more difficult. Good ground truth (or well-validated data such as that described in appendix I) is not available, and generating synthetic data as described in C.2 is not necessarily realistic, since the background noise is itself caused by spikes and neurons often fire in a highly correlated manner (thus rendering the approach of overlaying templates on spike-free noise problematic).
Minor comments:
- Fig. 3 bottom panel: Y axis is strange. Does 1^-x mean 10^-x? Also, it exaggerates tiny accuracy differences between 0.99 and 0.999, where both methods are essentially perfect.
- The authors use spatially whitened data (according to section 2.1), but I did not find a description of the spatial whitening procedure in the manuscript or supplement. |
nips_2017_2139 | Independence clustering (without a matrix)
The independence clustering problem is considered in the following formulation: given a set S of random variables, it is required to find the finest partitioning {U_1, ..., U_k} of S into clusters such that the clusters U_1, ..., U_k are mutually independent. Since mutual independence is the target, pairwise similarity measurements are of no use, and thus traditional clustering algorithms are inapplicable. The distribution of the random variables in S is, in general, unknown, but a sample is available. Thus, the problem is cast in terms of time series. Two forms of sampling are considered: i.i.d. and stationary time series, with the main emphasis being on the latter, more general, case. A consistent, computationally tractable algorithm for each of the settings is proposed, and a number of fascinating open directions for further research are outlined. | ABOUT:
This paper is about clustering N random variables into k mutually independent clusters. It considers the cases where k is known and unknown, as well as the cases where the distribution of the N variables is known, where it needs to be estimated from n i.i.d. samples, and where it needs to be estimated from stationary samples. Section 3 provides an optimal algorithm for known distributions which uses at most 2kN^2 oracle calls. Subsequent sections build on this algorithm in more complicated cases.
I do not know the clustering literature well enough to put this work in context. However, if the overview of the prior work is accurate, Sections 3 and 4 are a nice and important contribution. I have some reservations about Section 5. Overall, the exposition of results could be improved.
COMMENTS:
(1) I am not entirely convinced by the proof of Theorem 1. I believe the statement of the result is probably correct, but the proof needs to be more rigorous. It could be useful to state what exactly is meant by mutual independence for the benefit of the reader* and then to highlight in the proof how the algorithm actually achieves a mutually independent clustering. For the Split function part, an inductive proof could be one way to make the argument rigorous and easier to follow.
(2) It does not seem that the stationary case in Section 5 is covered adequately. It kind of reads like the authors tried to do too much and actually accomplished little. In parts, it just looks like a literature review or a math tutorial (e.g. Section 5.1).
- I am not sure what the issue with the mutual information rate being zero is. One natural extension of the setup in Section 4 would be to cluster the N random variables according to their stationary distribution at time i, with the n time-series samples then being used to estimate this distribution as well as possible. In that case the mutual information is perfectly well defined and everything follows through. This does not seem to be what is actually being done; instead, the clustering is performed over a whole infinite time series.
- The sum information in Definition 1 seems far less interesting and fundamental than the authors make it out to be. Also, as a side note, people have spent time thinking about what a mutual information rate for a random process should be in full generality [A], and this question perhaps deserves more attention than it is given here.
*I understand that some may find this too basic, but having this definition side by side with the proof would help the reader confirm correctness. More importantly, the issue of pairwise independence vs. mutual independence is the crux of the paper, and it should be stated explicitly (i.e., using math) what mutual independence means.
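Concretely, the standard definition I would expect to see is: the clusters U_1, ..., U_k are mutually independent iff their joint distribution factorizes,

    P\bigl(U_1 \in A_1, \ldots, U_k \in A_k\bigr) \;=\; \prod_{i=1}^{k} P\bigl(U_i \in A_i\bigr) \quad \text{for all measurable } A_1, \ldots, A_k,

whereas pairwise independence only requires the analogous factorization for each pair (U_i, U_j) and is strictly weaker: e.g., X, Y i.i.d. Bernoulli(1/2) together with Z = X XOR Y are pairwise independent but not mutually independent. This distinction is precisely why pairwise similarity measurements cannot solve the problem, and making it explicit would strengthen the proof of Theorem 1.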
MINOR COMMENT
- Line 156: N not n
[A] Te Sun Han, Information-Spectrum Methods in Information Theory. Springer, 2003.