id
stringlengths
12
15
title
stringlengths
8
162
content
stringlengths
1
17.6k
prechunk_id
stringlengths
0
15
postchunk_id
stringlengths
0
15
arxiv_id
stringlengths
10
10
references
sequencelengths
1
1
1502.03167#28
Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift
nity. BN-x30: Like BN-x5, but with the initial learning rate 0.045 (30 times that of Inception). BN-x5-Sigmoid: Like BN-x5, but with sigmoid non- 1+exp(â x) instead of ReLU. We also at- linearity g(t) = tempted to train the original Inception with sigmoid, but the model remained at the accuracy equivalent to chance. In Figure 2, we show the validation accuracy of the networks, as a function of the number of training steps. 106 Inception reached the accuracy of 72.2% after 31 training steps. The Figure 3 shows, for each network, the number of training steps required to reach the same 72.2% accuracy, as well as the maximum validation accu- racy reached by the network and the number of steps to reach it. By only using Batch Normalization (BN-Baseline), we match the accuracy of Inception in less than half the num- ber of training steps.
1502.03167#27
1502.03167#29
1502.03167
[ "1502.03167" ]
1502.03167#29
Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift
By applying the modiï¬ cations in Sec. 4.2.1, we signiï¬ cantly increase the training speed of the network. BN-x5 needs 14 times fewer steps than In- Interestingly, in- ception to reach the 72.2% accuracy. creasing the learning rate further (BN-x30) causes the model to train somewhat slower initially, but allows it to 106 reach a higher ï¬ nal accuracy. It reaches 74.8% after 6 steps, i.e. 5 times fewer steps than required by Inception to reach 72.2%. We also veriï¬ ed that the reduction in internal covari- ate shift allows deep networks with Batch Normalization
1502.03167#28
1502.03167#30
1502.03167
[ "1502.03167" ]
1502.03167#30
Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift
7 Model Inception BN-Baseline BN-x5 BN-x30 BN-x5-Sigmoid Steps to 72.2% Max accuracy 72.2% 72.7% 73.0% 74.8% 69.8% 106 106 106 106 31.0 13.3 2.1 2.7 · · · · Figure 3: For Inception and the batch-normalized variants, the number of training steps required to reach the maximum accuracy of Inception (72.2%), and the maximum accuracy achieved by the net- work. to be trained when sigmoid is used as the nonlinearity, despite the well-known difï¬ culty of training such net- works. Indeed, BN-x5-Sigmoid achieves the accuracy of 69.8%. Without Batch Normalization, Inception with sig- moid never achieves better than 1/1000 accuracy.
1502.03167#29
1502.03167#31
1502.03167
[ "1502.03167" ]
1502.03167#31
Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift
# 4.2.3 Ensemble Classiï¬ cation The current reported best results on the ImageNet Large Scale Visual Recognition Competition are reached by the Deep Image ensemble of traditional models (Wu et al., 2015) and the ensemble model of (He et al., 2015). The latter reports the top-5 error of 4.94%, as evaluated by the ILSVRC server. Here we report a top-5 validation error of 4.9%, and test error of 4.82% (according to the ILSVRC server). This improves upon the previous best result, and exceeds the estimated accuracy of human raters according to (Russakovsky et al., 2014). For our ensemble, we used 6 networks. Each was based on BN-x30, modiï¬ ed via some of the following: increased initial weights in the convolutional layers; using Dropout (with the Dropout probability of 5% or 10%, vs. 40% for the original Inception); and using non-convolutional, per-activation Batch Normalization with last hidden lay- ers of the model. Each network achieved its maximum 106 training steps. The ensemble accuracy after about 6 prediction was based on the arithmetic average of class probabilities predicted by the constituent networks. The details of ensemble and multicrop inference are similar to (Szegedy et al., 2014).
1502.03167#30
1502.03167#32
1502.03167
[ "1502.03167" ]
1502.03167#32
Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift
# Sep 201 We demonstrate in Fig. 4 that batch normalization al- lows us to set new state-of-the-art by a healthy margin on the ImageNet classiï¬ cation challenge benchmarks. # 5 Conclusion We have presented a novel mechanism for dramatically accelerating the training of deep networks. It is based on the premise that covariate shift, which is known to com- plicate the training of machine learning systems, also ap- Model GoogLeNet ensemble Deep Image low-res Deep Image high-res Deep Image ensemble BN-Inception single crop BN-Inception multicrop BN-Inception ensemble 224 256 512 variable 224 224 224 144 - - - 1 144 144 - - 24.88 - 25.2% 21.99% 20.1% Figure 4: Batch-Normalized Inception comparison with previous state of the art on the provided validation set com- prising 50000 images. *BN-Inception ensemble has reached 4.82% top-5 error on the 100000 images of the test set of the ImageNet as reported by the test server. plies to sub-networks and layers, and removing it from internal activations of the network may aid in training. Our proposed method draws its power from normalizing activations, and from incorporating this normalization in the network architecture itself. This ensures that the nor- malization is appropriately handled by any optimization method that is being used to train the network. To en- able stochastic optimization methods commonly used in deep network training, we perform the normalization for each mini-batch, and backpropagate the gradients through the normalization parameters. Batch Normalization adds only two extra parameters per activation, and in doing so preserves the representation ability of the network. We presented an algorithm for constructing, training, and per- forming inference with batch-normalized networks. The resulting networks can be trained with saturating nonlin- earities, are more tolerant to increased training rates, and often do not require Dropout for regularization. Merely adding Batch Normalization to a state-of-the- art image classiï¬ cation model yields a substantial speedup in training. By further increasing the learning rates, re- moving Dropout, and applying other modiï¬ cations af- forded by Batch Normalization, we reach the previous state of the art with only a small fraction of training steps â
1502.03167#31
1502.03167#33
1502.03167
[ "1502.03167" ]
1502.03167#33
Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift
and then beat the state of the art in single-network image classiï¬ cation. Furthermore, by combining multiple mod- els trained with Batch Normalization, we perform better than the best known system on ImageNet, by a signiï¬ cant margin. Interestingly, our method bears similarity to the stan- dardization layer of (G¨ulc¸ehre & Bengio, 2013), though the two methods stem from very different goals, and per- form different tasks. The goal of Batch Normalization is to achieve a stable distribution of activation values throughout training, and in our experiments we apply it before the nonlinearity since that is where matching the ï¬ rst and second moments is more likely to result in a stable distribution. On the contrary, (G¨ulc¸ehre & Bengio, 2013) apply the standardization layer to the output of the nonlinearity, which results in sparser activations. In our large-scale image classiï¬ cation experiments, we have not observed the nonlinearity inputs to be sparse, neither with nor without Batch Normalization. Other notable differ- entiating characteristics of Batch Normalization include the learned scale and shift that allow the BN transform to represent identity (the standardization layer did not re- quire this since it was followed by the learned linear trans- form that, conceptually, absorbs the necessary scale and shift), handling of convolutional layers, deterministic in- ference that does not depend on the mini-batch, and batch- normalizing each convolutional layer in the network. In this work, we have not explored the full range of possibilities that Batch Normalization potentially enables. Our future work includes applications of our method to Recurrent Neural Networks (Pascanu et al., 2013), where the internal covariate shift and the vanishing or exploding gradients may be especially severe, and which would al- low us to more thoroughly test the hypothesis that normal- ization improves gradient propagation (Sec. 3.3). We plan to investigate whether Batch Normalization can help with domain adaptation, in its traditional sense â i.e. whether the normalization performed by the network would al- low it to more easily generalize to new data distribu- tions, perhaps with just a recomputation of the population means and variances (Alg. 2).
1502.03167#32
1502.03167#34
1502.03167
[ "1502.03167" ]
1502.03167#34
Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift
Finally, we believe that fur- ther theoretical analysis of the algorithm would allow still more improvements and applications. # References Bengio, Yoshua and Glorot, Xavier. Understanding the difï¬ culty of training deep feedforward neural networks. In Proceedings of AISTATS 2010, volume 9, pp. 249â 256, May 2010. Dean, Jeffrey, Corrado, Greg S., Monga, Rajat, Chen, Kai, Devin, Matthieu, Le, Quoc V., Mao, Mark Z., Ranzato, Marcâ Aurelio, Senior, Andrew, Tucker, Paul, Yang, Ke, and Ng, Andrew Y.
1502.03167#33
1502.03167#35
1502.03167
[ "1502.03167" ]
1502.03167#35
Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift
Large scale distributed deep net- works. In NIPS, 2012. Desjardins, Guillaume and Kavukcuoglu, Koray. Natural neural networks. (unpublished). Duchi, John, Hazan, Elad, and Singer, Yoram. Adaptive subgradient methods for online learning and stochastic 8 optimization. J. Mach. Learn. Res., 12:2121â 2159, July 2011. ISSN 1532-4435. G¨ulc¸ehre, C¸ aglar and Bengio, Yoshua. Knowledge mat- ters: Importance of prior information for optimization. CoRR, abs/1301.4083, 2013. He, K., Zhang, X., Ren, S., and Sun, J. Delving Deep into Rectiï¬ ers: Surpassing Human-Level Performance on ImageNet Classiï¬ cation. ArXiv e-prints, February 2015. Hyv¨arinen, A. and Oja, E.
1502.03167#34
1502.03167#36
1502.03167
[ "1502.03167" ]
1502.03167#36
Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift
Independent component anal- ysis: Algorithms and applications. Neural Netw., 13 (4-5):411â 430, May 2000. Jiang, Jing. A literature survey on domain adaptation of statistical classiï¬ ers, 2008. LeCun, Y., Bottou, L., Bengio, Y., and Haffner, P. Gradient-based learning applied to document recog- nition. Proceedings of the IEEE, 86(11):2278â 2324, November 1998a. LeCun, Y., Bottou, L., Orr, G., and Muller, K. Efï¬ cient backprop. In Orr, G. and K., Muller (eds.), Neural Net- works: Tricks of the trade. Springer, 1998b. Lyu, S and Simoncelli, E P.
1502.03167#35
1502.03167#37
1502.03167
[ "1502.03167" ]
1502.03167#37
Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift
Nonlinear image representa- tion using divisive normalization. In Proc. Computer Vision and Pattern Recognition, pp. 1â 8. IEEE Com- puter Society, Jun 23-28 2008. doi: 10.1109/CVPR. 2008.4587821. Nair, Vinod and Hinton, Geoffrey E. Rectiï¬ ed linear units improve restricted boltzmann machines. In ICML, pp. 807â 814. Omnipress, 2010. Pascanu, Razvan, Mikolov, Tomas, and Bengio, Yoshua.
1502.03167#36
1502.03167#38
1502.03167
[ "1502.03167" ]
1502.03167#38
Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift
On the difï¬ culty of training recurrent neural networks. In Proceedings of the 30th International Conference on Machine Learning, ICML 2013, Atlanta, GA, USA, 16- 21 June 2013, pp. 1310â 1318, 2013. Povey, Daniel, Zhang, Xiaohui, and Khudanpur, San- jeev. Parallel training of deep neural networks with CoRR, natural gradient and parameter averaging. abs/1410.7455, 2014. Raiko, Tapani, Valpola, Harri, and LeCun, Yann. Deep learning made easier by linear transformations in per- ceptrons.
1502.03167#37
1502.03167#39
1502.03167
[ "1502.03167" ]
1502.03167#39
Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift
In International Conference on Artiï¬ cial In- telligence and Statistics (AISTATS), pp. 924â 932, 2012. Russakovsky, Olga, Deng, Jia, Su, Hao, Krause, Jonathan, Satheesh, Sanjeev, Ma, Sean, Huang, Zhiheng, Karpa- thy, Andrej, Khosla, Aditya, Bernstein, Michael, Berg, Alexander C., and Fei-Fei, Li. ImageNet Large Scale Visual Recognition Challenge, 2014.
1502.03167#38
1502.03167#40
1502.03167
[ "1502.03167" ]
1502.03167#40
Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift
9 Saxe, Andrew M., McClelland, James L., and Ganguli, Surya. Exact solutions to the nonlinear dynamics of learning in deep linear neural networks. CoRR, abs/1312.6120, 2013. Improving predictive inference under covariate shift by weighting the log-likelihood function. Journal of Statistical Planning and Inference, 90(2):227â 244, October 2000. Srivastava, Nitish, Hinton, Geoffrey, Krizhevsky, Alex, Sutskever, Ilya, and Salakhutdinov, Ruslan. Dropout: A simple way to prevent neural networks from overï¬
1502.03167#39
1502.03167#41
1502.03167
[ "1502.03167" ]
1502.03167#41
Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift
t- ting. J. Mach. Learn. Res., 15(1):1929â 1958, January 2014. Sutskever, Ilya, Martens, James, Dahl, George E., and Hinton, Geoffrey E. On the importance of initial- ization and momentum in deep learning. In ICML (3), volume 28 of JMLR Proceedings, pp. 1139â 1147. JMLR.org, 2013. Szegedy, Christian, Liu, Wei, Jia, Yangqing, Sermanet, Pierre, Reed, Scott, Anguelov, Dragomir, Erhan, Du- mitru, Vanhoucke, Vincent, and Rabinovich, An- CoRR, Going deeper with convolutions. drew. abs/1409.4842, 2014. Wiesler, Simon and Ney, Hermann. A convergence anal- ysis of log-linear training. In Shawe-Taylor, J., Zemel, R.S., Bartlett, P., Pereira, F.C.N., and Weinberger, K.Q. (eds.), Advances in Neural Information Processing Sys- tems 24, pp. 657â
1502.03167#40
1502.03167#42
1502.03167
[ "1502.03167" ]
1502.03167#42
Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift
665, Granada, Spain, December 2011. Wiesler, Simon, Richard, Alexander, Schl¨uter, Ralf, and Ney, Hermann. Mean-normalized stochastic gradient for large-scale deep learning. In IEEE International Conference on Acoustics, Speech, and Signal Process- ing, pp. 180â 184, Florence, Italy, May 2014. Wu, Ren, Yan, Shengen, Shan, Yi, Dang, Qingqing, and Sun, Gang. Deep image: Scaling up image recognition, 2015. # Appendix # Variant of the Inception Model Used Figure 5 documents the changes that were performed compared to the architecture with respect to the GoogleNet archictecture. For the interpretation of this table, please consult (Szegedy et al., 2014). The notable architecture changes compared to the GoogLeNet model include: 5 convolutional layers are replaced by two The 5 à consecutive 3 This in- creases the maximum depth of the network by 9 weight layers. Also it increases the number of pa- rameters by 25% and the computational cost is in- creased by about 30%.
1502.03167#41
1502.03167#43
1502.03167
[ "1502.03167" ]
1502.03167#43
Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift
â ¢ The number 28 from 2 to 3. à 28 inception modules is increased Inside the modules, sometimes average, sometimes maximum-pooling is employed. This is indicated in the entries corresponding to the pooling layers of the table. There are no across the board pooling layers be- tween any two Inception modules, but stride-2 con- volution/pooling layers are employed before the ï¬ l- ter concatenation in the modules 3c, 4e. Our model employed separable convolution with depth multiplier 8 on the ï¬ rst convolutional layer. This reduces the computational cost while increasing the memory con- sumption at training time.
1502.03167#42
1502.03167#44
1502.03167
[ "1502.03167" ]
1502.03167#44
Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift
10 #3Ã 3 reduce double #3Ã 3 reduce patch size/ stride 7Ã 7/2 3Ã 3/2 3Ã 3/1 3Ã 3/2 output size 112Ã 112Ã 64 56Ã 56Ã 64 56Ã 56Ã 192 28Ã 28Ã 192 28Ã 28Ã 256 28Ã 28Ã 320 28Ã 28Ã 576 14Ã 14Ã 576 14Ã 14Ã 576 14Ã 14Ã 576 14Ã 14Ã 576 14Ã 14Ã 1024 7Ã 7Ã 1024 7Ã 7Ã 1024 1Ã 1Ã 1024 double #3Ã 3 #3Ã 3 depth #1Ã 1 Pool +proj type convolution* max pool convolution max pool inception (3a) inception (3b) inception (3c) inception (4a) inception (4b) inception (4c) inception (4d) inception (4e) inception (5a) inception (5b) avg pool 1 0 1 0 3 3 3 3 3 3 3 3 3 3 0 192 64 64 96 160 96 128 160 192 192 320 320 64 64 0 224 192 160 96 0 352 352 64 64 64 96 96 128 160 192 160 192 64 64 128 64 96 128 128 128 192 192 96 96 96 128 128 160 192 256 224 224 avg + 32 avg + 64 max + pass through avg + 128 avg + 128 avg + 128 avg + 128 max + pass through avg + 128 max + 128 stride 2 stride 2 7Ã 7/1 Figure 5: Inception architecture
1502.03167#43
1502.03167#45
1502.03167
[ "1502.03167" ]
1502.03167#45
Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift
11
1502.03167#44
1502.03167
[ "1502.03167" ]
1502.02251#0
From Pixels to Torques: Policy Learning with Deep Dynamical Models
5 1 0 2 n u J 8 1 ] L M . t a t s [ 3 v 1 5 2 2 0 . 2 0 5 1 : v i X r a # From Pixels to Torques: Policy Learning with Deep Dynamical Models # Niklas Wahlstr¨om Division of Automatic Control, Link¨oping University, Link¨oping, Sweden [email protected] # Thomas B. Sch¨on Department of Information Technology, Uppsala University, Sweden # [email protected] # Marc Peter Deisenroth Department of Computing, Imperial College London, United Kingdom [email protected] # Abstract Data-efï¬
1502.02251#1
1502.02251
[ "1504.00702" ]
1502.02251#1
From Pixels to Torques: Policy Learning with Deep Dynamical Models
cient learning in continuous state-action spaces using very high-dimensional observations remains a key challenge in developing fully autonomous systems. In this paper, we con- the pix- sider one instance of this challenge, els to torques problem, where an agent must learn a closed-loop control policy from pixel in- formation only. We introduce a data-efï¬ cient, model-based reinforcement learning algorithm that learns such a closed-loop policy directly from pixel information. The key ingredient is a deep dynamical model that uses deep auto- encoders to learn a low-dimensional embedding of images jointly with a predictive model in this low-dimensional feature space. Joint learning ensures that not only static but also dynamic properties of the data are accounted for. This is crucial for long-term predictions, which lie at the core of the adaptive model predictive con- trol strategy that we use for closed-loop con- trol. Compared to state-of-the-art reinforcement learning methods for continuous states and ac- tions, our approach learns quickly, scales to high- dimensional state spaces and is an important step toward fully autonomous learning from pixels to torques. mation, (3) take new information into account for learning and adaptation. Effectively, any fully autonomous system has to close this perception-action-learning loop without relying on speciï¬ c human expert knowledge. The pixels to torques problem (Brock, 2011) identiï¬ es key aspects of an autonomous system: autonomous thinking and decision making using sensor measurements only, intelligent explo- ration and learning from mistakes. We consider the problem of learning closed-loop policies (â
1502.02251#0
1502.02251#2
1502.02251
[ "1504.00702" ]
1502.02251#2
From Pixels to Torques: Policy Learning with Deep Dynamical Models
torquesâ ) from pixel information end-to-end. A possible scenario is a scene in which a robot is moving about. The only available sensor information is provided by a camera, i.e., no direct information of the robotâ s joint conï¬ gura- tion is available. The objective is to learn a continuous- valued policy that allows the robotic agent to solve a task in this continuous environment in a data-efï¬ cient way, i.e., we want to keep the number of trials small. To date, there is no fully autonomous system that convincingly closes the perception-action-learning loop and solves the pixels to torques problem in continuous state-action spaces, the natural domains in robotics. A promising approach toward solving the pixels to torques problem is Reinforcement Learning (RL) (Sutton & Barto, 1998), a principled mathematical framework that deals with fully autonomous learning from trial and error. How- ever, one practical shortcoming of many existing RL algo- rithms is that they require many trials to learn good poli- cies, which is prohibitive when working with real-world mechanical plants or robots.
1502.02251#1
1502.02251#3
1502.02251
[ "1504.00702" ]
1502.02251#3
From Pixels to Torques: Policy Learning with Deep Dynamical Models
# 1. Introduction The vision of fully autonomous and intelligent systems that learn by themselves has inï¬ uenced AI and robotics re- search for many decades. To devise fully autonomous sys- tems, it is necessary to (1) process perceptual data (e.g., im- ages) to summarize knowledge about the surrounding envi- ronment and the systemâ s behavior in this environment, (2) make decisions based on uncertain and incomplete infor- One way of using data efï¬ ciently (and therefore keep the number of experiments small) is to learn forward models of the underlying dynamical system, which are then used for internal simulations and policy learning. These ideas have been successfully applied to RL, control and robotics in (Schmidhuber, 1990; Atkeson & Schaal, 1997; Bagnell & Schneider, 2001; Contardo et al., 2013; From Pixels to Torques: Policy Learning with Deep Dynamical Models Image at time t-1 Vr-1 Zr] Feature at time t-1 Prediction model Encoder â
1502.02251#2
1502.02251#4
1502.02251
[ "1504.00702" ]
1502.02251#4
From Pixels to Torques: Policy Learning with Deep Dynamical Models
â __> â â â > g! Decoder eS g Feature at time t Image at time t Zt vr Figure 1. Illustration of our idea of combining deep learning architectures for feature learning and prediction models in feature space. A camera observes a robot approaching an object. A good low-dimensional feature representation of an image is important for learning a predictive model if the camera is the only sensor available. Pan & Theodorou, 2014; Deisenroth et al., 2015; Pan & Theodorou, 2014; van Hoof et al., 2015; Levine et al., 2015), for instance. However, these methods use heuris- tic or engineered low-dimensional features, and they do not easily scale to data-efï¬ cient RL using pixel informa- tion only because even â smallâ images possess thousands of dimensions. we can use for internal simulation of the dynamical sys- tem. For this purpose, we employ deep auto-encoders for the lower-dimensional embedding and a multi-layer feed- forward neural network for the transition function. We use this deep dynamical model to predict trajectories and apply an adaptive model-predictive-control (MPC) algo- rithm (Mayne, 2014) for online closed-loop control, which is practically based on pixel information only. A common way of dealing with high-dimensional data is to learn low-dimensional feature representations. Deep learn- ing architectures, such as deep neural networks (Hinton & Salakhutdinov, 2006), stacked auto-encoders (Bengio et al., 2007; Vincent et al., 2008), or convolutional neu- ral networks (LeCun et al., 1998), are the current state of the art in learning parsimonious representations of high- dimensional data. Deep learning has been successfully ap- plied to image, text and speech data in commercial prod- ucts, e.g., by Google, Amazon and Facebook. Deep learning has been used to produce ï¬ rst promising results in the context of model-free RL on images: For instance, (Mnih et al., 2015) present an approach based on Deep-Q-learning, in which human-level game strategies are learned autonomously, purely based on pixel informa- tion.
1502.02251#3
1502.02251#5
1502.02251
[ "1504.00702" ]
1502.02251#5
From Pixels to Torques: Policy Learning with Deep Dynamical Models
Moreover, (Lange et al., 2012) presented an approach that learns good discrete actions to control a slot car based on raw images, employing deep architectures for ï¬ nding compact low-dimensional representations. Other examples of deep learning in the context of RL on image data in- clude (Cuccu et al., 2011; Koutnik et al., 2013). These ap- proaches have in common that they try to estimate the value function from which the policy is derived. However, nei- ther of these algorithms learns a predictive model and are, therefore, prone to data inefï¬ ciency, either requiring data collection from millions of experiments or relying on dis- cretization and very low-dimensional feature spaces, limit- ing their applicability to mechanical systems. To increase data efï¬ ciency, we therefore introduce a model- based approach to learning from pixels to torques. In par- ticular, exploit results from (Wahlstr¨om et al., 2015) and jointly learn a lower-dimensional embedding of images and a transition function in this lower-dimensional space that MPC has been well explored in the control community, However, adaptive MPC has so far not received much atten- tion in the literature (Mayne, 2014). An exception is (Sha, 2008), where the authors advocate a neural network ap- proach similar to ours. However, they do not consider high- dimensional data but assume that they have direct access to low-dimensional measurements. Our approach beneï¬ ts from the application of model- based optimal control principles within a machine learn- ing framework. Along these lines, (Deisenroth et al., 2009; Abramova et al., 2012; Boedecker et al., 2014; Pan & Theodorou, 2014; Levine et al., 2015) suggested to ï¬ rst learn a transition model and then use optimal control meth- ods to solve RL problems. Unlike these methods, our ap- proach does not need to estimate value functions and scales to high-dimensional problems. Similar to our approach, (Boots et al., 2014; Levine et al., 2015; van Hoof et al., 2015) recently proposed model- based RL methods that learn policies directly from vi- sual information.
1502.02251#4
1502.02251#6
1502.02251
[ "1504.00702" ]
1502.02251#6
From Pixels to Torques: Policy Learning with Deep Dynamical Models
Unlike these methods, we exploit a low- dimensional feature representation that allows for fast pre- dictions and online control learning via MPC. # Problem Set-up and Objective We consider a classical N-step finite-horizon RL setting in which an agent attempts to solve a particular task by trial and error. In particular, our objective is to find a closed-loop policy 7* that minimizes the long-term cost v= yea fo(xt, uz), where fo denotes an immediate cost, 7, â ¬ R? is the continuous-valued system state and uz â ¬ RF are continuous control inputs. â
1502.02251#5
1502.02251#7
1502.02251
[ "1504.00702" ]
1502.02251#7
From Pixels to Torques: Policy Learning with Deep Dynamical Models
From Pixels to Torques: Policy Learning with Deep Dynamical Models Input layer (high-dim. data) Y1yt Hidden layer (feature) Output layer (reconstructed) YL Encoder g~1 Decoder g Figure 2. Auto-encoder that consists of an encoder g~! and a decoder g. The encoder maps the original image yw â ¬ Râ ¢ onto its low-dimensional representation z; = goâ (ys) eRâ , where m < M; the decoder maps this feature back to a high- dimensional representation 7 = g(Z). The gray color represents high-dimensional observations. High-dim. observations Features Control inputs Figure 3. Prediction model: Each feature z; is computed from high-dimensional data y; via the encoder g~'. The transition model predicts the feature 2,41)/,,, at the next time step based on the n-step history of n past features z;-n41,..., Z¢ and con- trol inputs weâ n+1,-.., ut. The predicted feature 241), can be mapped to a high-dimensional prediction #41 via the decoder g. The gray color represents high-dimensional observations. # 2.1. Deep Auto-Encoder The learning agent faces the following additional chal- lenges: (a) The agent has no access to the true state, but perceives the environment only through high-dimensional pixel information (images), (b) a good control policy is re- quired in only a few trials. This setting is practically rel- evant, e.g., when the agent is a robot that is monitored by a video camera based on which the robot has to learn to solve tasks fully autonomously. Therefore, this setting is an instance of the pixels to torques problem.
1502.02251#6
1502.02251#8
1502.02251
[ "1504.00702" ]
1502.02251#8
From Pixels to Torques: Policy Learning with Deep Dynamical Models
# 2. Deep Dynamical Model We use a deep auto-encoder for embedding images in a low-dimensional feature space, where both the encoder g~! and the decoder g are modeled with deep neural networks. Each layer k of the encoder neural network g~! computes yt) = (Any + by), where o is a sigmoidal acti- vation function (we used arctan) and A, and by are free parameters. The input to the first layer is the image, i.e., (1) Y= Yt The last layer is the low-dimensional fea- ture representation of the image z:(Oz) = g~'(yt;@e), where 6 = [..., Ax, bx, -..] are the parameters of all neu- ral network layers. The decoder g consists of the same number of layers in reverse order, see Fig. 2, and ap- proximately inverts the encoder g, such that %; (9g, 0p) = 9(g~* (yt; 9E); OD) © ys is the reconstructed version of yz with an associated reconstruction error Our approach to solve the pixels-to-torques problem is based on a deep dynamical model (DDM), which jointly (i) embeds high-dimensional images in a low-dimensional feature space via deep auto-encoders and (ii) learns a pre- dictive forward model in this feature space (Wahlstro6m et al., 2015). In particular, we consider a DDM with con- trol inputs u and high-dimensional observations y. We as- sume that the relevant properties of y can be compactly represented by a feature variable z. The two components of the DDM, i.e., the low-dimensional embedding and the prediction model, which predicts future observations yt+1 based on past observations and control inputs, are de- tailed in the following. Throughout this paper, y, denotes the high-dimensional measurements, z, the corresponding low-dimensional encoded features and %; the reconstructed high-dimensional measurement. Further, 2,41 and #41 de- note a predicted feature and measurement at time t + 1, respectively. εR t (θE, θD) = yt (1) # â Ge(Oe, OD).
1502.02251#7
1502.02251#9
1502.02251
[ "1504.00702" ]
1502.02251#9
From Pixels to Torques: Policy Learning with Deep Dynamical Models
The main purpose of the deep auto-encoder is to keep this reconstruction error and the associated compression loss negligible, such that the features zt are a compact repre- sentation of the images yt. # 2.2. Prediction Model We now turn the static auto-encoder into a dynamical model that can predict future features 2, and images Ji41. The encoder g~+ allows us to map high-dimensional observations y; onto low-dimensional features z;. For pre- dicting we assume that future features 241 ,, depend on an n-step history h, of past features and control inputs, ie., Zr ajh, (Op) = f (Zt, Ue, ++ Ztâ ng1, Weng; Op), (2)
1502.02251#8
1502.02251#10
1502.02251
[ "1504.00702" ]
1502.02251#10
From Pixels to Torques: Policy Learning with Deep Dynamical Models
From Pixels to Torques: Policy Learning with Deep Dynamical Models where f is a nonlinear transition function, in our case a feed-forward neural network, and θP are the correspond- ing model parameters. This is a nonlinear autoregressive exogenous model (NARX) (Ljung, 1999). The predictive performance of the model will be important for model pre- dictive control (see Section 3) and for model learning based on the prediction error (Ljung, 1999). To predict future observations Y141 ,, We exploit the de- coder, such that +1)n, = 9(2:41)n,39D)- The deep de- coder g maps features z to high-dimensional observations y parameterized by Op.
1502.02251#9
1502.02251#11
1502.02251
[ "1504.00702" ]
1502.02251#11
From Pixels to Torques: Policy Learning with Deep Dynamical Models
cost function is minimized by the BFGS algorithm (No- cedal & Wright, 2006). Note that in (5a) it is crucial to include not only the prediction error VP, but also the re- construction error VR. Without this term the multi-step ahead prediction performance will decrease because pre- dicted features are not consistent with features achieved from the encoder. Since we consider a control problem in this paper, multi-step ahead predictive performance is cru- cial. Now, we are ready to put the pieces together: With feature prediction model (2) and the deep auto-encoder, the DDM predicts future features and images according to
1502.02251#10
1502.02251#12
1502.02251
[ "1504.00702" ]
1502.02251#12
From Pixels to Torques: Policy Learning with Deep Dynamical Models
zt(θE) = gâ 1(yt; θE), (3a) n+1; θP), (3b) Zr4ajn, (Op, Op) = f (Zt; Wes +s Zn gas Urn $13 OP) Tesrjn,, (Oe, Od, OP) = g(Zr41]h,3 9D), (3b) which is illustrated in Fig. 3.
1502.02251#11
1502.02251#13
1502.02251
[ "1504.00702" ]
1502.02251#13
From Pixels to Torques: Policy Learning with Deep Dynamical Models
With this prediction model we define the prediction error â εP t+1(θE, θD, θP) = yt+1 (4) â Tern, (Ox, 4%, Op), Initialization. With a linear activation function the auto- encoder and PCA are identical (Bourlard & Kamp, 1988), which we exploit to initialize the parameters of the auto- encoder: The auto-encoder network is unfolded, each pair of layers in the encoder and the decoder are combined, and the corresponding PCA solution is computed for each of these pairs. We start with high-dimensional image data at the top layer and use the principal components from that pair of layers as input to the next pair of layers. Thereby, we recursively compute a good initialization for all parameters of the auto-encoder. Similar pre-training routines are found in (Hinton & Salakhutdinov, 2006), in which a restricted Boltzmann machine is used instead of PCA. where yt+1 is the observed image at time t + 1.
1502.02251#12
1502.02251#14
1502.02251
[ "1504.00702" ]
1502.02251#14
From Pixels to Torques: Policy Learning with Deep Dynamical Models
# 2.3. Training The DDM is parameterized by the encoder parameters 6p, the decoder parameters @p and the prediction model param- eters Op. In the DDM, we train both the prediction model and the deep auto-encoder jointly by finding parameters (6, , 6p). such that such that Op) =arg min 8.00 N (6x, 8p, Op) =arg min Va(9g, Op) + Vo(Oz, Op, Op), (Sa) 8.00 N c Vez, 8; 0) = D>, _, ler Oz, 6, 8). (5b) N Va(G, 40) = D2, llet @z, 0) |, (Se) which minimizes the sums of squared reconstruction (1) and prediction (4) errors. We learn all model parameters θE, θD, θP jointly by solv- ing (5a).1 The required gradients with respect to the param- eters are computed efï¬ ciently by back-propagation, and the In this section, we have presented a DDM that facili- tates fast predictions of high-dimensional observations via a low-dimensional embedded time series. The property of fast predictions will be exploited by the online feedback control strategy presented in the following. More details on the proposed model are given in (Wahlstr¨om et al., 2015).
1502.02251#13
1502.02251#15
1502.02251
[ "1504.00702" ]
1502.02251#15
From Pixels to Torques: Policy Learning with Deep Dynamical Models
# 3. Learning Closed-Loop Policies from Images We use the DDM for learning a closed-loop policy by means of nonlinear model predictive control (MPC). We start off by an introduction to classical MPC, before mov- ing on to MPC on images in Section 3.1. MPC ï¬ nds an op- timal sequence of control signals that minimizes a K-step loss function, where K is typically smaller than the full horizon. In general, MPC relies on (a) a reference trajec- 1, . . . , xâ tory xref = xâ K (which can be a constant reference signal) and (b) a dynamics model â Normally when features are used for learning dynamical models, they are first extracted from the data in a pre-processing step by minimizing (5c) with respect to the auto-encoder param- eters 02,4. In a second step, the prediction model parameters Op are estimated based on these features by minimizing (5b) con- ditioned on the estimated 05 and . In our experience, a prob- lem with this approach is that the learned features might have a small reconstruction error, but this representation will not be ideal for learning a transition model. The supplementary material dis- cusses this in more detail. xt+1 = f (xt, ut), (6) which, assuming that the current state is denoted by xo, can be used to compute/predict a state trajectory Z1,...,£« for a given sequence uo,...,WuxKâ
1502.02251#14
1502.02251#16
1502.02251
[ "1504.00702" ]
1502.02251#16
From Pixels to Torques: Policy Learning with Deep Dynamical Models
1 of control signals. Using the dynamics model MPC determines an optimal (open- loop) control sequence ug,...,Uj,_,, such that the pre- dicted trajectory %1,...,2« gets as close to the reference From Pixels to Torques: Policy Learning with Deep Dynamical Models trajectory xref as possible, such that K-1 Up,.+., Uz, â ¬ arg min > |Z, â a |? + Aljuel|?, Uu0o:K-1 i=0 where ||7; â 27||? is a cost associated with the deviation of the predicted state trajectory Zo. ; from the reference tra- jectory ayer, and ||u;||? penalizes the amplitude of the con- trol signals. Note that the predicted £, depends on all pre- vious ug:7â
1502.02251#15
1502.02251#17
1502.02251
[ "1504.00702" ]
1502.02251#17
From Pixels to Torques: Policy Learning with Deep Dynamical Models
1. When the control sequence up,...,Uj_1 is determined, the first control ug is applied to the system. After observing the next state, MPC repeats the entire op- timization and turns the overall policy into a closed-loop (feedback) control strategy. # 3.1. MPC on Images of our MPC formulation lies the DDM, which is used to predict future states (8) from a sequence of control inputs. The quality of the MPC controller is inherently bound to the prediction quality of the dynamical model, which is typical in model-based RL (Schneider, 1997; Schaal, 1997; Deisenroth et al., 2015). To learn models and controllers from scratch, we apply a control scheme that allows us to update the DDM as new data arrives. In particular, we use the MPC controller in an adaptive fashion to gradually improve the model by col- lected data in the feedback loop without any speciï¬ c prior knowledge of the system at hand. Data collection is per- formed in closed-loop (online MPC), and it is divided into multiple sequential trials. After each trial, we add the data of the most recent trajectory to the data set, and the model is re-trained using all data that has been collected so far. We now turn the classical MPC procedure into MPC on im- ages by exploiting some convenient properties of the DDM. The DDM allows us to predict features 21,...,2« based on a sequence of controls uo, ..., ux â
1502.02251#16
1502.02251#18
1502.02251
[ "1504.00702" ]
1502.02251#18
From Pixels to Torques: Policy Learning with Deep Dynamical Models
1. By comparing (6) with (2), we define the state xo as the present and past nâ 1 features and the past n â 1 control inputs, such that â x0 = [z0, . . . , zâ n+1, uâ 1, . . . , uâ n+1]. (8) The DDM computes the present and past features with the encoder zt = gâ 1(yt, θE), such that x0 is known at the current time, which matches the MPC requirement. Our objective is to control the system towards a desired refer- ence image frame yref. This reference frame yref can also be encoded to a corresponding reference feature zref = gâ 1(yref, θE), which results in the MPC objective K-1 Up,-++,Uxâ 1 © arg min > 2: â zreel|? +Alfuell?, (9) uoK-1 4=9 Up,-++,Uxâ 1 © arg min > 2: â zreel|? +Alfuell?, (9) uoK-1 4=9 where x, defined in (8), is the current state. The gradi- ents of the cost function (9) with respect to the control sig- nals uo,...,WKâ 1 are computed in closed form, and we use BFGS to find the optimal sequence of control signals. Note that the objective function depends on uo,...,uKâ 1 not only via the control penalty |||? but also via the fea- ture predictions 21.â 1 of the DDM via (2). Overall, we now have an online MPC algorithm that, given a trained DDM, works indirectly on images by exploiting their feature representation. In the following, we will now turn this into an iterative algorithm that learns predictive models from images and good controllers from scratch. Algorithm 1 Adaptive MPC in feature space Algorithm 1 Adaptive MPC in feature space Follow a random control strategy and record data loop Update DDM with all data collected so far for t = 0 to Nâ 1do Get state x; via auto-encoder uy < â ¬-greedy MPC policy using DDM prediction Apply uj and record data end for end loop
1502.02251#17
1502.02251#19
1502.02251
[ "1504.00702" ]
1502.02251#19
From Pixels to Torques: Policy Learning with Deep Dynamical Models
Simply applying the MPC controller based on a randomly initialized model would make the closed-loop system very likely to converge to a point, which is far away from the desired reference value, due to the poor model that can- not extrapolate well to unseen states. This would in turn imply that no data is collected in unexplored regions, in- cluding the region that we actually are interested in. There are two solutions to this problem: Either we use a proba- bilistic dynamics model as suggested in (Schneider, 1997; Deisenroth et al., 2015) to explicitly account for model un- certainty and the implied natural exploration or we follow an explicit exploration strategy to ensure proper excitation of the system. In this paper, we follow the latter approach. In particular, we choose an e-greedy exploration strategy where the optimal feedback uw at each time step is selected with a probability 1 â ¢, and a random action is selected with probability e. # 3.2. Adaptive MPC for Learning from Scratch We will now turn over to describe how (adaptive) MPC can be used together with our DDM to address the pixels to torques problem and to learn from scratch.
1502.02251#18
1502.02251#20
1502.02251
[ "1504.00702" ]
1502.02251#20
From Pixels to Torques: Policy Learning with Deep Dynamical Models
At the core Algorithm | summarizes our adaptive online MPC scheme. We initialize the DDM with a random trial. We use the learned DDM to find an e-greedy policy using predicted features within MPC. This happens online. The collected data is added to the data set and the DDM is updated after each trial. From Pixels to Torques: Policy Learning with Deep Dynamical Models True video frames Yeo Yer Yer2 Yer3 Vera Yes Yer6 YT Yes Predicted video frames Yerole -Yerrle â -Yerait â Yersie â Yerale â Yersie Yer olt t+sie
1502.02251#19
1502.02251#21
1502.02251
[ "1504.00702" ]
1502.02251#21
From Pixels to Torques: Policy Learning with Deep Dynamical Models
Figure 4. Long-term (up to eight steps) predictive performance of the DDM: True (upper plot) and predicted (lower plot) video frames on test data. # 4. Experimental Results (a) Autoencoder and prediction model In the following, we empirically assess the components of our proposed methodology for autonomous learning from high-dimensional synthetic image data: (a) the quality of the learned DDM and (b) the overall learning framework. In both cases, we consider a sequence of images (51 51 = 2601 pixels) and a control input associated with these im- ages. Each pixel y(i) is a component of the measurement t R2601 and assumes a continuous gray-value in the in- yt terval [0, 1]. No access to the underlying dynamics or the state (angle Ï and angular velocity Ë Ï ) was available, i.e., we are dealing with a high-dimensional continuous state space. The challenge was to learn (a) a good dynamics model (b) a good controller from pixel information only. We used a sampling frequency of 0.2 s and a time horizon of 25 s, which corresponds to 100 frames per trial. The input dimension has been reduced to dim(yt) = 50 prior to model learning using PCA. With these 50- dimensional inputs, a four-layer auto-encoder network was used with dimension 50-25-12-6-2, such that the features were of dimension dim(zt) = 2, which is optimal to model the periodic angle of the pendulum. The order of the dy- namics was selected to be n = 2 (i.e., we consider two consecutive image frames) to capture velocity information, such that zt+1 = f (zt, ut, ztâ
1502.02251#20
1502.02251#22
1502.02251
[ "1504.00702" ]
1502.02251#22
From Pixels to Torques: Policy Learning with Deep Dynamical Models
1, utâ 1). For the prediction model f we used a feedforward neural network with a 6-4- 2 architecture. Note that the dimension of the ï¬ rst layer is given by n(dim(zt) + dim(ut)) = 2(2 + 1) = 6. # (b) Only auto-encoder Figure 5. Feature space for both joint (a) and sequential training (b) of auto-encoder and prediction model. The feature space is divided into grid points. For each grid point the decoded high- dimensional image is displayed and the feature values for the training data (red) and validation data (yellow) are overlain. For the joint training the feature values reside on a two-dimensional manifold that corresponds to the two-dimensional position of the tile. For the separate training the feature values are scattered with- out structure.
1502.02251#21
1502.02251#23
1502.02251
[ "1504.00702" ]
1502.02251#23
From Pixels to Torques: Policy Learning with Deep Dynamical Models
# 4.1. Learning Predictive Models from Pixels To assess the predictive performance of the DDM, we took 601 screenshots of a moving tile, see Fig. 4. The control inputs are the (random) increments in position in horizontal and vertical directions. We evaluate the performance of the learned DDM in terms of long-term predictions, which play a central role in MPC for autonomous learning. Long-term predictions are ob- tained by concatenating multiple 1-step ahead predictions. The performance of the DDM is illustrated in Fig. 4 on a test data set. The top row shows the ground truth images and the bottom row shows the DDMâ s long-term predic- tions. The model predicts future frames of the tile with high accuracy both for 1-step ahead and multiple steps ahead. The model yields a good predictive performance for both one-step ahead prediction and multiple-step ahead predic- tion. In Fig. 5(a), the feature representation of the data is dis- played. The features reside on a two-dimensional manifold that encodes the two-dimensional position of the moving From Pixels to Torques: Policy Learning with Deep Dynamical Models Ist trial 4th trial 7th trial Angle [rad] Angle [rad] Time [s] Time [s] Time [s] Figure 7. Control performance after 1st to 15th trial evaluated with ε = 0 for 16 different experiments. The objective was to reach an angle of ±Ï
1502.02251#22
1502.02251#24
1502.02251
[ "1504.00702" ]
1502.02251#24
From Pixels to Torques: Policy Learning with Deep Dynamical Models
. Figure 6. The feature space z â [â 1, 1] Ã [â 1, 1] is divided into 9 Ã 9 grid points for illustration purposes. For each grid point the decoded high-dimensional image is displayed. Green: Feature values that correspond to collected experience in previous trials. Cyan: Feature value that corresponds to the current time step. Red: Desired reference value. Yellow: 15-steps-ahead prediction after optimizing for the optimal control inputs. tile. By inspecting the decoded images we can see that each corner of the manifold corresponds to a corner po- sition of the tile.
1502.02251#23
1502.02251#25
1502.02251
[ "1504.00702" ]
1502.02251#25
From Pixels to Torques: Policy Learning with Deep Dynamical Models
Due to this structure a relatively simple prediction model is sufï¬ cient to describe the dynamics. In case the auto-encoder and the prediction model would have been learned sequentially (ï¬ rst training the auto-encoder, and then based on these features values train the predic- tion model) such a structure would not have been enforced. In Fig. 5(b) the corresponding feature representation is displayed where only the auto-encoder has been trained. Clearly, these features does not exhibit such a structure. # 4.2.
1502.02251#24
1502.02251#26
1502.02251
[ "1504.00702" ]
1502.02251#26
From Pixels to Torques: Policy Learning with Deep Dynamical Models
Closed-Loop Policy Learning from Pixels the DDM using all collected data so far, where we also in- clude the reference image while learning the auto-encoder. Fig. 6 displays the decoded images corresponding to 1, 1]2. The learned fea- learned latent representations in [ ture values of the training data (green) line up in a circular shape, such that a relatively simple prediction model is suf- ï¬ cient to describe the dynamics. If we would not have opti- mized for both the prediction error and reconstruction error, such an advantageous structure of the feature values would not have been obtained. The DDM extracts features that can also model the dynamic behavior compactly.
1502.02251#25
1502.02251#27
1502.02251
[ "1504.00702" ]
1502.02251#27
From Pixels to Torques: Policy Learning with Deep Dynamical Models
The ï¬ gure also shows the predictions produced by the MPC controller (yellow), starting from the current time step (cyan) and tar- geting the reference feature (red) where the pendulum is in the target position. To assess the controller performance after each trial, we applied a greedy policy (â ¬ = 0). In Fig. 7, angle trajectories for 15 of the 50 experiments at different learning stages are displayed. In the first trial, the controller managed only ina few cases to drive the pendulum toward the reference value +t. The control performance increased gradually with the number of trials, and after the 15th trial, it manages in most cases to get it to an upright position. In this section, we report results on learning a policy that moves a pendulum (1-link robot arm with length 1m, weight | kg and friction coefficient 1 Nsm/rad) from a start position y = 0 to a target position y = +7. The reference signal was the screenshot of the pendulum in the target po- sition. For the MPC controller, we used a planning horizon of P = 15 steps and a control penalty \ = 0.01. For the e-greedy exploration strategy we used â ¬ = 0.2. We con- ducted 50 independent experiments with different random initializations. The learning algorithm was run for 15 trials (plus an initial random trial). After each trial, we retrained To assess the data efï¬ ciency of our approach, we compared it with the PILCO RL framework (Deisenroth et al., 2015) to learning closed-loop control policies for the pendulum task above. PILCO is a current state-of-the art model-based RL algorithm for data-efï¬ cient learning of control policies in continuous state-control spaces. Using collected data PILCO learns a probabilistic model of the system dynam- ics, implemented as a Gaussian process (GP) (Rasmussen & Williams, 2006). Subsequently, this model is used to compute a distribution over trajectories and the correspond- From Pixels to Torques: Policy Learning with Deep Dynamical Models 1 0.8 e t a R s s e c c u S 0.6 0.4 0.2 PILCO w/ 2D state (Ï , Ë
1502.02251#26
1502.02251#28
1502.02251
[ "1504.00702" ]
1502.02251#28
From Pixels to Torques: Policy Learning with Deep Dynamical Models
Ï ) PILCO w/ 2D AE features PILCO w/ 20D PCA features DDM+MPC 0 0 500 1,000 1,500 separately. The auto-encoder ï¬ nds good features that min- imize the reconstruction error. However, these features are not good for modeling the dynamic behavior of the sys- tem,3 and lead to bad long-term predictions. Computation times of PILCO and our method are vastly different: While PILCO spends most time optimizing pol- icy parameters, our model spends most of the time on learn- ing the DDM. Computing the optimal nonparametric MPC policy happens online and does not require signiï¬ cant com- putational overhead. To put this into context, PILCO re- quired a few days of learning time for 10 trials (in a 20D feature space). In a 2D feature space, running PILCO for 10 trials and 1000 data points requires about 10 hours. # Number of frames (100 per trial) Figure 8. Average learning success with standard errors. Blue: PILCO ground-truth RL baseline using the true state (Ï , Ë Ï ). Red:
1502.02251#27
1502.02251#29
1502.02251
[ "1504.00702" ]
1502.02251#29
From Pixels to Torques: Policy Learning with Deep Dynamical Models
PILCO with learned auto-encoder features from image pixels. Cyan: PILCO on 20D feature determined by PCA. Black: Our proposed MPC solution using the DDM. ing expected cost, which is used for gradient-based opti- mization of the controller parameters. Although PILCO uses data very efï¬ ciently, its computa- tional demand makes its direct application impractical for 20 D) problems, many data points or high-dimensional ( such that we had to make suitable adjustments to apply PILCO to the pixels-to-torques problem. In particular, we performed the following experiments: (1) PILCO applied to 20D PCA features, (2) PILCO applied to 2D features learned by deep auto-encoders, (3) An optimal baseline where we applied PILCO to the standard RL setting with access to the â
1502.02251#28
1502.02251#30
1502.02251
[ "1504.00702" ]
1502.02251#30
From Pixels to Torques: Policy Learning with Deep Dynamical Models
trueâ state (Ï , Ë Ï ) (Deisenroth et al., 2015). Overall, our DDM+MPC approach to learning closed-loop policies from high-dimensional observations exploits the learned Deep Dynamical Model to learn good policies fairly data efï¬ ciently. # 5. Conclusion We have proposed a data-efï¬ cient model-based RL algo- rithm that learns closed-loop policies in continuous state and action spaces directly from pixel information. The key components of our solution are (1) a deep dynamical model (DDM) that is used for long-term predictions in a compact feature space and (2) an MPC controller that uses the pre- dictions of the DDM to determine optimal actions on the ï¬ y without the need for value function estimation. For the suc- cess of this RL algorithm it is crucial that the DDM learns the feature mapping and the predictive model in feature space jointly to capture dynamic behavior for high-quality long-term predictions. Compared to state-of-the-art RL our algorithm learns fairly quickly, scales to high-dimensional state spaces and facilitates learning from pixels to torques.
1502.02251#29
1502.02251#31
1502.02251
[ "1504.00702" ]
1502.02251#31
From Pixels to Torques: Policy Learning with Deep Dynamical Models
# Acknowledgments Fig. 8 displays the average success rate of PILCO (in- cluding standard error) and our proposed method using deep dynamical models together with a tailored MPC (DDM+MPC). We deï¬ ne â successâ if the pendulumâ s an- gle is stabilized within 10â ¦ around the target state.2 The baseline (PILCO trained on the ground-truth 2D state (Ï , Ë Ï )) is shown in blue and solves the task very quickly. The graph shows that our proposed algorithm (black), which learns torques directly from pixels, is not too far behind the ground-truth RL solution, achieving a n almost 90% success rate after 15 trials (1500 image frames). How- ever, PILCO trained on the 2D auto-encoder features (red) and 20D PCA features fail consistently in all experiments We explain PILCOâ s failure by the fact that we trained the auto-encoder and the transition dynamics in feature space 2Since we consider a continuous setting, we have to deï¬ ne a target region. This work was supported by the Swedish Foundation for Strategic Research under the project Cooperative Localiza- tion and the Swedish Research Council under the project Probabilistic modeling of dynamical systems (Contract number: 621-2013-5524). MPD was supported by an Im- perial College Junior Research Fellowship. # References Abramova, Ekatarina, Dickens, Luke, Kuhn, Daniel, and Faisal, A. Aldo.
1502.02251#30
1502.02251#32
1502.02251
[ "1504.00702" ]
1502.02251#32
From Pixels to Torques: Policy Learning with Deep Dynamical Models
Hierarchical, heterogeneous control us- ing reinforcement learning. In EWRL, 2012. 3When we inspected the latent-space embedding of the auto- encoder, the pendulum angles do not nicely line up along an â easyâ manifold as in Fig. 6. See supplementary material for more details. From Pixels to Torques: Policy Learning with Deep Dynamical Models Atkeson, Christopher G. and Schaal, S. Learning tasks from a single demonstration. In ICRA, 1997. LeCun, Y, Bottou, L, Bengio, Y, and Haffner, P. Gradient- based learning applied to document recognition. Proc. of the IEEE, 86(11):2278â 2324, 1998. Bagnell, James A. and Schneider, Jeff G.
1502.02251#31
1502.02251#33
1502.02251
[ "1504.00702" ]
1502.02251#33
From Pixels to Torques: Policy Learning with Deep Dynamical Models
Autonomous helicopter control using reinforcement learning policy search methods. In ICRA, 2001. Levine, Sergey, Finn, Chelsea, Darrell, Trevor, and Abbeel, Pieter. End-to-end training of deep visuomotor policies. arXiv preprint arXiv:1504.00702, 2015. Bengio, Yoshua, Lamblin, Pascal, Popovici, Dan, and Larochelle, Hugo. Greedy layer-wise training of deep networks. In NIPS, 2007.
1502.02251#32
1502.02251#34
1502.02251
[ "1504.00702" ]
1502.02251#34
From Pixels to Torques: Policy Learning with Deep Dynamical Models
Ljung, L. System Identiï¬ cation: Theory for the User. Pren- tice Hall, 1999. Boedecker, Joschka, Springenberg, Jost Tobias, W¨ulï¬ ng, Jan, and Riedmiller, Martin. Approximate real-time op- timal control based on sparse Gaussian process models. In ADPRL, 2014. Boots, Byron, Byravan, Arunkumar, and Fox, Dieter.
1502.02251#33
1502.02251#35
1502.02251
[ "1504.00702" ]
1502.02251#35
From Pixels to Torques: Policy Learning with Deep Dynamical Models
Learning predictive models of a depth camera & manip- ulator from raw execution traces. In ICRA, 2014. Mayne, David Q. Model predictive control: Recent devel- opments and future promise. Automatica, 50(12):2967â 2986, 2014. Mnih, Volodymyr, Kavukcuoglu, Koray, Silver, David, Rusu, Andrei A, Veness, Joel, Bellemare, Marc G, Graves, Alex, Riedmiller, Martin, Fidjeland, Andreas K, Ostrovski, Georg, and et al. Human-level control Nature, 518 through deep reinforcement (7540):529â 533, 2015. Bourlard, Herv´e and Kamp, Yves.
1502.02251#34
1502.02251#36
1502.02251
[ "1504.00702" ]
1502.02251#36
From Pixels to Torques: Policy Learning with Deep Dynamical Models
Auto-association by multilayer perceptrons and singular value decomposi- tion. Biological Cybernetics, 59(4-5):291â 294, 1988. Nocedal, J. and Wright, S. J. Numerical Optimization. Springer, 2006. Brock, Oliver. Berlin Summit on Robotics: Conference Re- port, chapter Is Robotics in Need of a Paradigm Shift?, pp. 1â 10. 2011. Contardo, Gabriella, Denoyer, Ludovic, Artieres, Thierry, and Gallinari, Patrick.
1502.02251#35
1502.02251#37
1502.02251
[ "1504.00702" ]
1502.02251#37
From Pixels to Torques: Policy Learning with Deep Dynamical Models
Learning states representations in POMDP. arXiv preprint arXiv:1312.6042, 2013. Cuccu, Giuseppe, Luciw, Matthew, Schmidhuber, J¨urgen, and Gomez, Faustino. Intrinsically motivated neuroevo- lution for vision-based reinforcement learning. In ICDL, 2011. Pan, Yunpeng and Theodorou, Evangelos. Probabilistic dif- ferential dynamic programming. In NIPS, 2014. Rasmussen, Carl E. and Williams, Christopher K. I. Gaus- sian Processes for Machine Learning. The MIT Press, 2006. Schaal, Stefan.
1502.02251#36
1502.02251#38
1502.02251
[ "1504.00702" ]
1502.02251#38
From Pixels to Torques: Policy Learning with Deep Dynamical Models
Learning from demonstration. In NIPS. 1997. Schmidhuber, J¨urgen. An on-line algorithm for dynamic reinforcement learning and planning in reactive environ- ments. In IJCNN, 1990. Deisenroth, Marc P., Rasmussen, Carl E., and Peters, Jan. Gaussian process dynamic programming. Neurocomput- ing, 72(7â 9):1508â 1524, 2009. Deisenroth, Marc P., Fox, Dieter, and Rasmussen, Carl E.
1502.02251#37
1502.02251#39
1502.02251
[ "1504.00702" ]
1502.02251#39
From Pixels to Torques: Policy Learning with Deep Dynamical Models
Gaussian processes for data-efï¬ cient learning in robotics and control. IEEE-TPAMI, 37(2):408â 423, 2015. Hinton, G and Salakhutdinov, R. Reducing the dimension- ality of data with neural networks. Science, 313:504â 507, 2006. Koutnik, Jan, Cuccu, Giuseppe, Schmidhuber, J¨urgen, and Gomez, Faustino. Evolving large-scale neural networks In GECCO, for vision-based reinforcement learning. 2013. Schneider, Jeff G. Exploiting model uncertainty estimates for safe dynamic control learning. In NIPS. 1997.
1502.02251#38
1502.02251#40
1502.02251
[ "1504.00702" ]
1502.02251#40
From Pixels to Torques: Policy Learning with Deep Dynamical Models
Sha, Daohang. A new neural networks based adaptive model predictive control for unknown multiple variable non-linear systems. IJAMS, 1(2):146â 155, 2008. Sutton, Richard S. and Barto, Andrew G. Reinforcement Learning: An Introduction. The MIT Press, 1998. van Hoof, Herke, Peters, Jan, and Neumann, Gerhard. Learning of non-parametric control policies with high- dimensional state features. In AISTATS, 2015. Vincent, P, Larochelle, H, Bengio, Y, and Manzagol, Pierre-Antoine. Extracting and composing robust fea- tures with denoising autoencoders. In ICML, 2008. Lange, Sascha, Riedmiller, Martin, and Voigtl¨ander, Arne. Autonomous reinforcement learning on raw visual input data in a real-world application.
1502.02251#39
1502.02251#41
1502.02251
[ "1504.00702" ]
1502.02251#41
From Pixels to Torques: Policy Learning with Deep Dynamical Models
In IJCNN, 2012. Wahlstr¨om, Niklas, Sch¨on, Thomas B., and Deisenroth, Marc P. Learning deep dynamical models from image pixels. In SYSID, 2015.
1502.02251#40
1502.02251
[ "1504.00702" ]